Test Report: KVM_Linux_crio 18213

d7784bd4e07917c4cb201a553088c10d6998a83a:2024-03-15:33580

Failed tests (32/325)

Order  Failed test  Duration (seconds)
39 TestAddons/parallel/Ingress 161.74
53 TestAddons/StoppedEnableDisable 154.47
172 TestMultiControlPlane/serial/StopSecondaryNode 141.94
174 TestMultiControlPlane/serial/RestartSecondaryNode 61.69
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 360.76
177 TestMultiControlPlane/serial/DeleteSecondaryNode 19.91
179 TestMultiControlPlane/serial/StopCluster 172.88
180 TestMultiControlPlane/serial/RestartCluster 457.38
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.29
182 TestMultiControlPlane/serial/AddSecondaryNode 58.36
239 TestMultiNode/serial/RestartKeepsNodes 306.77
241 TestMultiNode/serial/StopMultiNode 141.58
248 TestPreload 244.29
256 TestKubernetesUpgrade 396.37
299 TestStartStop/group/old-k8s-version/serial/FirstStart 302.1
306 TestStartStop/group/embed-certs/serial/Stop 139.17
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.41
312 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 97.47
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
320 TestStartStop/group/no-preload/serial/Stop 138.98
323 TestStartStop/group/old-k8s-version/serial/SecondStart 729.7
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.39
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.19
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.41
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.48
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 371.61
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 544.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 303.19
333 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 128.58
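
To reproduce one of these failures outside CI, a minimal sketch with the standard Go test runner is shown below. The ./test/integration package path matches the minikube repository layout; the -minikube-start-args flag and its values are assumptions about this test harness and may need adjusting for your checkout.

    cd minikube
    go test -v -timeout 90m ./test/integration \
      -run 'TestAddons/parallel/Ingress' \
      -minikube-start-args='--driver=kvm2 --container-runtime=crio'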
TestAddons/parallel/Ingress (161.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-480837 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-480837 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-480837 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6358bc9d-0837-4e49-ab72-c24ef4add6c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6358bc9d-0837-4e49-ab72-c24ef4add6c7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.003902214s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-480837 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.048339074s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-480837 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.159
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-480837 addons disable ingress-dns --alsologtostderr -v=1: (1.500373411s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-480837 addons disable ingress --alsologtostderr -v=1: (7.928376267s)
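
The decisive failure above is the in-VM curl: ssh propagated exit status 28, which is curl's code for a timed-out request, so nginx behind the ingress never answered within the roughly two-minute retry window. A hedged way to re-check the same path by hand against this profile (the curl options and kubectl queries below are illustrative, not taken from the test):

    out/minikube-linux-amd64 -p addons-480837 ssh \
      "curl -sS -m 10 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-480837 -n ingress-nginx get pods,svc
    kubectl --context addons-480837 get ingress -o wide
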
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-480837 -n addons-480837
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-480837 logs -n 25: (1.38911813s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-168992                                                                     | download-only-168992 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:57 UTC |
	| delete  | -p download-only-502138                                                                     | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:57 UTC |
	| delete  | -p download-only-396128                                                                     | download-only-396128 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:57 UTC |
	| delete  | -p download-only-168992                                                                     | download-only-168992 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-455686 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC |                     |
	|         | binary-mirror-455686                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42939                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-455686                                                                     | binary-mirror-455686 | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC |                     |
	|         | addons-480837                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC |                     |
	|         | addons-480837                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-480837 --wait=true                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 05:57 UTC | 15 Mar 24 05:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | addons-480837                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-480837 ssh cat                                                                       | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | /opt/local-path-provisioner/pvc-32155ee8-605a-4b28-a7c9-57ea10158efb_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-480837 addons disable                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-480837 ip                                                                            | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	| addons  | addons-480837 addons disable                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-480837 addons disable                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-480837 addons                                                                        | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | addons-480837                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:00 UTC |
	|         | -p addons-480837                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-480837 ssh curl -s                                                                   | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-480837 addons                                                                        | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:00 UTC | 15 Mar 24 06:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:01 UTC | 15 Mar 24 06:01 UTC |
	|         | -p addons-480837                                                                            |                      |         |         |                     |                     |
	| addons  | addons-480837 addons                                                                        | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:01 UTC | 15 Mar 24 06:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-480837 ip                                                                            | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:02 UTC | 15 Mar 24 06:02 UTC |
	| addons  | addons-480837 addons disable                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:02 UTC | 15 Mar 24 06:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-480837 addons disable                                                                | addons-480837        | jenkins | v1.32.0 | 15 Mar 24 06:02 UTC | 15 Mar 24 06:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 05:57:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 05:57:27.809684   16964 out.go:291] Setting OutFile to fd 1 ...
	I0315 05:57:27.809792   16964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:57:27.809800   16964 out.go:304] Setting ErrFile to fd 2...
	I0315 05:57:27.809804   16964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:57:27.809998   16964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 05:57:27.810568   16964 out.go:298] Setting JSON to false
	I0315 05:57:27.811358   16964 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2344,"bootTime":1710479904,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 05:57:27.811423   16964 start.go:139] virtualization: kvm guest
	I0315 05:57:27.813569   16964 out.go:177] * [addons-480837] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 05:57:27.815519   16964 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 05:57:27.815491   16964 notify.go:220] Checking for updates...
	I0315 05:57:27.816996   16964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 05:57:27.818437   16964 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 05:57:27.819578   16964 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:57:27.820862   16964 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 05:57:27.822525   16964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 05:57:27.824099   16964 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 05:57:27.854525   16964 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 05:57:27.855637   16964 start.go:297] selected driver: kvm2
	I0315 05:57:27.855667   16964 start.go:901] validating driver "kvm2" against <nil>
	I0315 05:57:27.855677   16964 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 05:57:27.856482   16964 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:57:27.856565   16964 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 05:57:27.870574   16964 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 05:57:27.870629   16964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 05:57:27.870887   16964 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 05:57:27.870967   16964 cni.go:84] Creating CNI manager for ""
	I0315 05:57:27.870983   16964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:57:27.870997   16964 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 05:57:27.871062   16964 start.go:340] cluster config:
	{Name:addons-480837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-480837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 05:57:27.871184   16964 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:57:27.873087   16964 out.go:177] * Starting "addons-480837" primary control-plane node in "addons-480837" cluster
	I0315 05:57:27.874608   16964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 05:57:27.874643   16964 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 05:57:27.874655   16964 cache.go:56] Caching tarball of preloaded images
	I0315 05:57:27.874737   16964 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 05:57:27.874748   16964 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 05:57:27.875079   16964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/config.json ...
	I0315 05:57:27.875102   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/config.json: {Name:mk025ee4b74a6625682f3cea2f849372f41906be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:57:27.875253   16964 start.go:360] acquireMachinesLock for addons-480837: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 05:57:27.875297   16964 start.go:364] duration metric: took 29.832µs to acquireMachinesLock for "addons-480837"
	I0315 05:57:27.875313   16964 start.go:93] Provisioning new machine with config: &{Name:addons-480837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-480837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 05:57:27.875373   16964 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 05:57:27.877167   16964 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0315 05:57:27.877291   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:57:27.877338   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:57:27.891270   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0315 05:57:27.891648   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:57:27.892141   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:57:27.892161   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:57:27.892550   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:57:27.892767   16964 main.go:141] libmachine: (addons-480837) Calling .GetMachineName
	I0315 05:57:27.892923   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:27.893048   16964 start.go:159] libmachine.API.Create for "addons-480837" (driver="kvm2")
	I0315 05:57:27.893072   16964 client.go:168] LocalClient.Create starting
	I0315 05:57:27.893107   16964 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 05:57:28.125159   16964 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 05:57:28.188912   16964 main.go:141] libmachine: Running pre-create checks...
	I0315 05:57:28.188934   16964 main.go:141] libmachine: (addons-480837) Calling .PreCreateCheck
	I0315 05:57:28.189495   16964 main.go:141] libmachine: (addons-480837) Calling .GetConfigRaw
	I0315 05:57:28.189888   16964 main.go:141] libmachine: Creating machine...
	I0315 05:57:28.189901   16964 main.go:141] libmachine: (addons-480837) Calling .Create
	I0315 05:57:28.190047   16964 main.go:141] libmachine: (addons-480837) Creating KVM machine...
	I0315 05:57:28.191327   16964 main.go:141] libmachine: (addons-480837) DBG | found existing default KVM network
	I0315 05:57:28.192166   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:28.192013   16986 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0315 05:57:28.192197   16964 main.go:141] libmachine: (addons-480837) DBG | created network xml: 
	I0315 05:57:28.192207   16964 main.go:141] libmachine: (addons-480837) DBG | <network>
	I0315 05:57:28.192214   16964 main.go:141] libmachine: (addons-480837) DBG |   <name>mk-addons-480837</name>
	I0315 05:57:28.192226   16964 main.go:141] libmachine: (addons-480837) DBG |   <dns enable='no'/>
	I0315 05:57:28.192238   16964 main.go:141] libmachine: (addons-480837) DBG |   
	I0315 05:57:28.192251   16964 main.go:141] libmachine: (addons-480837) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 05:57:28.192258   16964 main.go:141] libmachine: (addons-480837) DBG |     <dhcp>
	I0315 05:57:28.192267   16964 main.go:141] libmachine: (addons-480837) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 05:57:28.192278   16964 main.go:141] libmachine: (addons-480837) DBG |     </dhcp>
	I0315 05:57:28.192289   16964 main.go:141] libmachine: (addons-480837) DBG |   </ip>
	I0315 05:57:28.192298   16964 main.go:141] libmachine: (addons-480837) DBG |   
	I0315 05:57:28.192339   16964 main.go:141] libmachine: (addons-480837) DBG | </network>
	I0315 05:57:28.192378   16964 main.go:141] libmachine: (addons-480837) DBG | 
	I0315 05:57:28.197898   16964 main.go:141] libmachine: (addons-480837) DBG | trying to create private KVM network mk-addons-480837 192.168.39.0/24...
	I0315 05:57:28.263514   16964 main.go:141] libmachine: (addons-480837) DBG | private KVM network mk-addons-480837 192.168.39.0/24 created
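	# A hedged manual spot check, assuming virsh is available on the Jenkins host:
	# confirm the private network that was just created (name taken from the log lines above).
	virsh net-list --all
	virsh net-dumpxml mk-addons-480837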
	I0315 05:57:28.263564   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:28.263477   16986 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:57:28.263592   16964 main.go:141] libmachine: (addons-480837) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837 ...
	I0315 05:57:28.263611   16964 main.go:141] libmachine: (addons-480837) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 05:57:28.263640   16964 main.go:141] libmachine: (addons-480837) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 05:57:28.519627   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:28.519520   16986 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa...
	I0315 05:57:28.581561   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:28.581419   16986 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/addons-480837.rawdisk...
	I0315 05:57:28.581589   16964 main.go:141] libmachine: (addons-480837) DBG | Writing magic tar header
	I0315 05:57:28.581599   16964 main.go:141] libmachine: (addons-480837) DBG | Writing SSH key tar header
	I0315 05:57:28.581607   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:28.581560   16986 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837 ...
	I0315 05:57:28.581715   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837
	I0315 05:57:28.581767   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 05:57:28.581779   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837 (perms=drwx------)
	I0315 05:57:28.581792   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 05:57:28.581799   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 05:57:28.581808   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 05:57:28.581820   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 05:57:28.581833   16964 main.go:141] libmachine: (addons-480837) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 05:57:28.581847   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:57:28.581859   16964 main.go:141] libmachine: (addons-480837) Creating domain...
	I0315 05:57:28.581869   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 05:57:28.581879   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 05:57:28.581884   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home/jenkins
	I0315 05:57:28.581891   16964 main.go:141] libmachine: (addons-480837) DBG | Checking permissions on dir: /home
	I0315 05:57:28.581896   16964 main.go:141] libmachine: (addons-480837) DBG | Skipping /home - not owner
	I0315 05:57:28.582811   16964 main.go:141] libmachine: (addons-480837) define libvirt domain using xml: 
	I0315 05:57:28.582849   16964 main.go:141] libmachine: (addons-480837) <domain type='kvm'>
	I0315 05:57:28.582865   16964 main.go:141] libmachine: (addons-480837)   <name>addons-480837</name>
	I0315 05:57:28.582873   16964 main.go:141] libmachine: (addons-480837)   <memory unit='MiB'>4000</memory>
	I0315 05:57:28.582882   16964 main.go:141] libmachine: (addons-480837)   <vcpu>2</vcpu>
	I0315 05:57:28.582886   16964 main.go:141] libmachine: (addons-480837)   <features>
	I0315 05:57:28.582891   16964 main.go:141] libmachine: (addons-480837)     <acpi/>
	I0315 05:57:28.582898   16964 main.go:141] libmachine: (addons-480837)     <apic/>
	I0315 05:57:28.582903   16964 main.go:141] libmachine: (addons-480837)     <pae/>
	I0315 05:57:28.582910   16964 main.go:141] libmachine: (addons-480837)     
	I0315 05:57:28.582915   16964 main.go:141] libmachine: (addons-480837)   </features>
	I0315 05:57:28.582926   16964 main.go:141] libmachine: (addons-480837)   <cpu mode='host-passthrough'>
	I0315 05:57:28.582980   16964 main.go:141] libmachine: (addons-480837)   
	I0315 05:57:28.583009   16964 main.go:141] libmachine: (addons-480837)   </cpu>
	I0315 05:57:28.583019   16964 main.go:141] libmachine: (addons-480837)   <os>
	I0315 05:57:28.583029   16964 main.go:141] libmachine: (addons-480837)     <type>hvm</type>
	I0315 05:57:28.583038   16964 main.go:141] libmachine: (addons-480837)     <boot dev='cdrom'/>
	I0315 05:57:28.583048   16964 main.go:141] libmachine: (addons-480837)     <boot dev='hd'/>
	I0315 05:57:28.583055   16964 main.go:141] libmachine: (addons-480837)     <bootmenu enable='no'/>
	I0315 05:57:28.583065   16964 main.go:141] libmachine: (addons-480837)   </os>
	I0315 05:57:28.583082   16964 main.go:141] libmachine: (addons-480837)   <devices>
	I0315 05:57:28.583097   16964 main.go:141] libmachine: (addons-480837)     <disk type='file' device='cdrom'>
	I0315 05:57:28.583113   16964 main.go:141] libmachine: (addons-480837)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/boot2docker.iso'/>
	I0315 05:57:28.583121   16964 main.go:141] libmachine: (addons-480837)       <target dev='hdc' bus='scsi'/>
	I0315 05:57:28.583126   16964 main.go:141] libmachine: (addons-480837)       <readonly/>
	I0315 05:57:28.583131   16964 main.go:141] libmachine: (addons-480837)     </disk>
	I0315 05:57:28.583139   16964 main.go:141] libmachine: (addons-480837)     <disk type='file' device='disk'>
	I0315 05:57:28.583148   16964 main.go:141] libmachine: (addons-480837)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 05:57:28.583159   16964 main.go:141] libmachine: (addons-480837)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/addons-480837.rawdisk'/>
	I0315 05:57:28.583165   16964 main.go:141] libmachine: (addons-480837)       <target dev='hda' bus='virtio'/>
	I0315 05:57:28.583170   16964 main.go:141] libmachine: (addons-480837)     </disk>
	I0315 05:57:28.583178   16964 main.go:141] libmachine: (addons-480837)     <interface type='network'>
	I0315 05:57:28.583184   16964 main.go:141] libmachine: (addons-480837)       <source network='mk-addons-480837'/>
	I0315 05:57:28.583191   16964 main.go:141] libmachine: (addons-480837)       <model type='virtio'/>
	I0315 05:57:28.583196   16964 main.go:141] libmachine: (addons-480837)     </interface>
	I0315 05:57:28.583203   16964 main.go:141] libmachine: (addons-480837)     <interface type='network'>
	I0315 05:57:28.583217   16964 main.go:141] libmachine: (addons-480837)       <source network='default'/>
	I0315 05:57:28.583227   16964 main.go:141] libmachine: (addons-480837)       <model type='virtio'/>
	I0315 05:57:28.583240   16964 main.go:141] libmachine: (addons-480837)     </interface>
	I0315 05:57:28.583253   16964 main.go:141] libmachine: (addons-480837)     <serial type='pty'>
	I0315 05:57:28.583266   16964 main.go:141] libmachine: (addons-480837)       <target port='0'/>
	I0315 05:57:28.583276   16964 main.go:141] libmachine: (addons-480837)     </serial>
	I0315 05:57:28.583293   16964 main.go:141] libmachine: (addons-480837)     <console type='pty'>
	I0315 05:57:28.583306   16964 main.go:141] libmachine: (addons-480837)       <target type='serial' port='0'/>
	I0315 05:57:28.583315   16964 main.go:141] libmachine: (addons-480837)     </console>
	I0315 05:57:28.583321   16964 main.go:141] libmachine: (addons-480837)     <rng model='virtio'>
	I0315 05:57:28.583335   16964 main.go:141] libmachine: (addons-480837)       <backend model='random'>/dev/random</backend>
	I0315 05:57:28.583343   16964 main.go:141] libmachine: (addons-480837)     </rng>
	I0315 05:57:28.583354   16964 main.go:141] libmachine: (addons-480837)     
	I0315 05:57:28.583370   16964 main.go:141] libmachine: (addons-480837)     
	I0315 05:57:28.583382   16964 main.go:141] libmachine: (addons-480837)   </devices>
	I0315 05:57:28.583393   16964 main.go:141] libmachine: (addons-480837) </domain>
	I0315 05:57:28.583402   16964 main.go:141] libmachine: (addons-480837) 
	I0315 05:57:28.589881   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:12:ed:60 in network default
	I0315 05:57:28.590388   16964 main.go:141] libmachine: (addons-480837) Ensuring networks are active...
	I0315 05:57:28.590430   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:28.591042   16964 main.go:141] libmachine: (addons-480837) Ensuring network default is active
	I0315 05:57:28.591357   16964 main.go:141] libmachine: (addons-480837) Ensuring network mk-addons-480837 is active
	I0315 05:57:28.593113   16964 main.go:141] libmachine: (addons-480837) Getting domain xml...
	I0315 05:57:28.593705   16964 main.go:141] libmachine: (addons-480837) Creating domain...
	I0315 05:57:29.969833   16964 main.go:141] libmachine: (addons-480837) Waiting to get IP...
	I0315 05:57:29.970536   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:29.970896   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:29.970923   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:29.970864   16986 retry.go:31] will retry after 247.42869ms: waiting for machine to come up
	I0315 05:57:30.220330   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:30.220710   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:30.220735   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:30.220653   16986 retry.go:31] will retry after 257.374296ms: waiting for machine to come up
	I0315 05:57:30.480249   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:30.480714   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:30.480748   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:30.480680   16986 retry.go:31] will retry after 339.270664ms: waiting for machine to come up
	I0315 05:57:30.821146   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:30.821590   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:30.821620   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:30.821553   16986 retry.go:31] will retry after 404.97036ms: waiting for machine to come up
	I0315 05:57:31.228270   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:31.228767   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:31.228800   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:31.228718   16986 retry.go:31] will retry after 626.149247ms: waiting for machine to come up
	I0315 05:57:31.856339   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:31.856773   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:31.856807   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:31.856722   16986 retry.go:31] will retry after 940.31085ms: waiting for machine to come up
	I0315 05:57:32.798840   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:32.799361   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:32.799389   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:32.799311   16986 retry.go:31] will retry after 1.158138615s: waiting for machine to come up
	I0315 05:57:33.959366   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:33.959739   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:33.959776   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:33.959689   16986 retry.go:31] will retry after 1.262965832s: waiting for machine to come up
	I0315 05:57:35.224117   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:35.224556   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:35.224586   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:35.224513   16986 retry.go:31] will retry after 1.205084098s: waiting for machine to come up
	I0315 05:57:36.431921   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:36.432387   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:36.432413   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:36.432348   16986 retry.go:31] will retry after 2.184987379s: waiting for machine to come up
	I0315 05:57:38.618668   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:38.619077   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:38.619105   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:38.619032   16986 retry.go:31] will retry after 2.905932297s: waiting for machine to come up
	I0315 05:57:41.528062   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:41.528541   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:41.528566   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:41.528485   16986 retry.go:31] will retry after 2.640677117s: waiting for machine to come up
	I0315 05:57:44.170581   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:44.170947   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:44.170974   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:44.170895   16986 retry.go:31] will retry after 3.632553809s: waiting for machine to come up
	I0315 05:57:47.807725   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:47.808285   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find current IP address of domain addons-480837 in network mk-addons-480837
	I0315 05:57:47.808310   16964 main.go:141] libmachine: (addons-480837) DBG | I0315 05:57:47.808247   16986 retry.go:31] will retry after 5.29231276s: waiting for machine to come up
	I0315 05:57:53.102401   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.102895   16964 main.go:141] libmachine: (addons-480837) Found IP for machine: 192.168.39.159
	I0315 05:57:53.102936   16964 main.go:141] libmachine: (addons-480837) Reserving static IP address...
	I0315 05:57:53.102949   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has current primary IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.103338   16964 main.go:141] libmachine: (addons-480837) DBG | unable to find host DHCP lease matching {name: "addons-480837", mac: "52:54:00:9e:1d:7e", ip: "192.168.39.159"} in network mk-addons-480837
	I0315 05:57:53.176338   16964 main.go:141] libmachine: (addons-480837) DBG | Getting to WaitForSSH function...
	I0315 05:57:53.176419   16964 main.go:141] libmachine: (addons-480837) Reserved static IP address: 192.168.39.159
	I0315 05:57:53.176502   16964 main.go:141] libmachine: (addons-480837) Waiting for SSH to be available...
	I0315 05:57:53.179072   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.179690   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.179718   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.179970   16964 main.go:141] libmachine: (addons-480837) DBG | Using SSH client type: external
	I0315 05:57:53.179996   16964 main.go:141] libmachine: (addons-480837) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa (-rw-------)
	I0315 05:57:53.180024   16964 main.go:141] libmachine: (addons-480837) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 05:57:53.180039   16964 main.go:141] libmachine: (addons-480837) DBG | About to run SSH command:
	I0315 05:57:53.180051   16964 main.go:141] libmachine: (addons-480837) DBG | exit 0
	I0315 05:57:53.317011   16964 main.go:141] libmachine: (addons-480837) DBG | SSH cmd err, output: <nil>: 
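	# A hedged manual equivalent of the SSH readiness check above, assuming virsh and a
	# standard ssh client on the host; the key path and IP are the ones this log reports.
	virsh net-dhcp-leases mk-addons-480837
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa \
	    docker@192.168.39.159 'exit 0'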
	I0315 05:57:53.317294   16964 main.go:141] libmachine: (addons-480837) KVM machine creation complete!
	I0315 05:57:53.317610   16964 main.go:141] libmachine: (addons-480837) Calling .GetConfigRaw
	I0315 05:57:53.318161   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:53.318368   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:53.318548   16964 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 05:57:53.318562   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:57:53.319630   16964 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 05:57:53.319646   16964 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 05:57:53.319651   16964 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 05:57:53.319656   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:53.321762   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.322123   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.322147   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.322275   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:53.322426   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.322617   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.322778   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:53.322932   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:53.323163   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:53.323174   16964 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 05:57:53.432558   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 05:57:53.432581   16964 main.go:141] libmachine: Detecting the provisioner...
	I0315 05:57:53.432588   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:53.435304   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.435661   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.435690   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.435800   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:53.436026   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.436181   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.436308   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:53.436443   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:53.436667   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:53.436687   16964 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 05:57:53.550020   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 05:57:53.550092   16964 main.go:141] libmachine: found compatible host: buildroot
	I0315 05:57:53.550101   16964 main.go:141] libmachine: Provisioning with buildroot...
	I0315 05:57:53.550111   16964 main.go:141] libmachine: (addons-480837) Calling .GetMachineName
	I0315 05:57:53.550365   16964 buildroot.go:166] provisioning hostname "addons-480837"
	I0315 05:57:53.550391   16964 main.go:141] libmachine: (addons-480837) Calling .GetMachineName
	I0315 05:57:53.550595   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:53.553784   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.554061   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.554101   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.554239   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:53.554449   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.554639   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.554810   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:53.554988   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:53.555152   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:53.555165   16964 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-480837 && echo "addons-480837" | sudo tee /etc/hostname
	I0315 05:57:53.684048   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-480837
	
	I0315 05:57:53.684091   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:53.687128   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.687494   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.687527   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.687681   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:53.687856   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.688026   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:53.688163   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:53.688360   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:53.688596   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:53.688615   16964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-480837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-480837/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-480837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 05:57:53.811191   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 05:57:53.811224   16964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 05:57:53.811282   16964 buildroot.go:174] setting up certificates
	I0315 05:57:53.811301   16964 provision.go:84] configureAuth start
	I0315 05:57:53.811321   16964 main.go:141] libmachine: (addons-480837) Calling .GetMachineName
	I0315 05:57:53.811612   16964 main.go:141] libmachine: (addons-480837) Calling .GetIP
	I0315 05:57:53.814492   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.814889   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.814918   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.815052   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:53.817221   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.817656   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:53.817702   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:53.817750   16964 provision.go:143] copyHostCerts
	I0315 05:57:53.817822   16964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 05:57:53.817967   16964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 05:57:53.818055   16964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 05:57:53.818140   16964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.addons-480837 san=[127.0.0.1 192.168.39.159 addons-480837 localhost minikube]
	I0315 05:57:54.011521   16964 provision.go:177] copyRemoteCerts
	I0315 05:57:54.011588   16964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 05:57:54.011610   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.014619   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.015094   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.015131   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.015366   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.015565   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.015739   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.015859   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:57:54.104101   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 05:57:54.133943   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 05:57:54.162521   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 05:57:54.188918   16964 provision.go:87] duration metric: took 377.589285ms to configureAuth
	I0315 05:57:54.188941   16964 buildroot.go:189] setting minikube options for container-runtime
	I0315 05:57:54.189153   16964 config.go:182] Loaded profile config "addons-480837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 05:57:54.189234   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.192086   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.192450   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.192494   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.192679   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.192905   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.193081   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.193213   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.193361   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:54.193608   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:54.193625   16964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 05:57:54.477278   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 05:57:54.477305   16964 main.go:141] libmachine: Checking connection to Docker...
	I0315 05:57:54.477313   16964 main.go:141] libmachine: (addons-480837) Calling .GetURL
	I0315 05:57:54.478842   16964 main.go:141] libmachine: (addons-480837) DBG | Using libvirt version 6000000
	I0315 05:57:54.481515   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.482068   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.482097   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.482286   16964 main.go:141] libmachine: Docker is up and running!
	I0315 05:57:54.482302   16964 main.go:141] libmachine: Reticulating splines...
	I0315 05:57:54.482309   16964 client.go:171] duration metric: took 26.589227871s to LocalClient.Create
	I0315 05:57:54.482329   16964 start.go:167] duration metric: took 26.589281877s to libmachine.API.Create "addons-480837"
	I0315 05:57:54.482347   16964 start.go:293] postStartSetup for "addons-480837" (driver="kvm2")
	I0315 05:57:54.482357   16964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 05:57:54.482370   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:54.482622   16964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 05:57:54.482653   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.485753   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.486152   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.486177   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.486248   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.486456   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.486645   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.486804   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:57:54.571604   16964 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 05:57:54.576457   16964 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 05:57:54.576505   16964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 05:57:54.576575   16964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 05:57:54.576625   16964 start.go:296] duration metric: took 94.272851ms for postStartSetup
	I0315 05:57:54.576665   16964 main.go:141] libmachine: (addons-480837) Calling .GetConfigRaw
	I0315 05:57:54.577252   16964 main.go:141] libmachine: (addons-480837) Calling .GetIP
	I0315 05:57:54.579795   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.580136   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.580158   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.580386   16964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/config.json ...
	I0315 05:57:54.580623   16964 start.go:128] duration metric: took 26.705241562s to createHost
	I0315 05:57:54.580646   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.583180   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.583532   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.583565   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.583709   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.583931   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.584083   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.584206   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.584388   16964 main.go:141] libmachine: Using SSH client type: native
	I0315 05:57:54.584590   16964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0315 05:57:54.584604   16964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 05:57:54.697656   16964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710482274.648854563
	
	I0315 05:57:54.697675   16964 fix.go:216] guest clock: 1710482274.648854563
	I0315 05:57:54.697683   16964 fix.go:229] Guest: 2024-03-15 05:57:54.648854563 +0000 UTC Remote: 2024-03-15 05:57:54.58063677 +0000 UTC m=+26.815614782 (delta=68.217793ms)
	I0315 05:57:54.697715   16964 fix.go:200] guest clock delta is within tolerance: 68.217793ms
	I0315 05:57:54.697722   16964 start.go:83] releasing machines lock for "addons-480837", held for 26.822416122s
	I0315 05:57:54.697747   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:54.698044   16964 main.go:141] libmachine: (addons-480837) Calling .GetIP
	I0315 05:57:54.700647   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.701037   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.701065   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.701224   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:54.701713   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:54.701867   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:57:54.701953   16964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 05:57:54.702009   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.702067   16964 ssh_runner.go:195] Run: cat /version.json
	I0315 05:57:54.702084   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:57:54.704760   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.705081   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.705109   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.705127   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.705262   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.705446   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.705528   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:54.705563   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:54.705580   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.705713   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:57:54.705792   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:57:54.705872   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:57:54.705991   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:57:54.706127   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:57:54.785933   16964 ssh_runner.go:195] Run: systemctl --version
	I0315 05:57:54.826264   16964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 05:57:54.988505   16964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 05:57:54.995942   16964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 05:57:54.996019   16964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 05:57:55.013728   16964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 05:57:55.013756   16964 start.go:494] detecting cgroup driver to use...
	I0315 05:57:55.013822   16964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 05:57:55.029960   16964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 05:57:55.044631   16964 docker.go:217] disabling cri-docker service (if available) ...
	I0315 05:57:55.044687   16964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 05:57:55.058549   16964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 05:57:55.072482   16964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 05:57:55.184201   16964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 05:57:55.319599   16964 docker.go:233] disabling docker service ...
	I0315 05:57:55.319666   16964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 05:57:55.334985   16964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 05:57:55.350248   16964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 05:57:55.481588   16964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 05:57:55.602408   16964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 05:57:55.617451   16964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 05:57:55.636878   16964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 05:57:55.636935   16964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 05:57:55.648190   16964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 05:57:55.648275   16964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 05:57:55.659408   16964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 05:57:55.670711   16964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 05:57:55.682232   16964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 05:57:55.694600   16964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 05:57:55.705159   16964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 05:57:55.705223   16964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 05:57:55.719020   16964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 05:57:55.729054   16964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 05:57:55.844629   16964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 05:57:55.986417   16964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 05:57:55.986503   16964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 05:57:55.991803   16964 start.go:562] Will wait 60s for crictl version
	I0315 05:57:55.991874   16964 ssh_runner.go:195] Run: which crictl
	I0315 05:57:55.996314   16964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 05:57:56.033812   16964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 05:57:56.033923   16964 ssh_runner.go:195] Run: crio --version
	I0315 05:57:56.063628   16964 ssh_runner.go:195] Run: crio --version
	I0315 05:57:56.097968   16964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 05:57:56.099495   16964 main.go:141] libmachine: (addons-480837) Calling .GetIP
	I0315 05:57:56.102128   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:56.102462   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:57:56.102491   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:57:56.102693   16964 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 05:57:56.107237   16964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 05:57:56.120185   16964 kubeadm.go:877] updating cluster {Name:addons-480837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-480837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 05:57:56.120282   16964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 05:57:56.120320   16964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 05:57:56.153116   16964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 05:57:56.153182   16964 ssh_runner.go:195] Run: which lz4
	I0315 05:57:56.157328   16964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 05:57:56.161430   16964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 05:57:56.161451   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 05:57:57.782440   16964 crio.go:444] duration metric: took 1.625133418s to copy over tarball
	I0315 05:57:57.782518   16964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 05:58:00.467442   16964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.684898401s)
	I0315 05:58:00.467472   16964 crio.go:451] duration metric: took 2.685003606s to extract the tarball
	I0315 05:58:00.467483   16964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 05:58:00.511291   16964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 05:58:00.554117   16964 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 05:58:00.554139   16964 cache_images.go:84] Images are preloaded, skipping loading
	I0315 05:58:00.554147   16964 kubeadm.go:928] updating node { 192.168.39.159 8443 v1.28.4 crio true true} ...
	I0315 05:58:00.554253   16964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-480837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-480837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 05:58:00.554319   16964 ssh_runner.go:195] Run: crio config
	I0315 05:58:00.605372   16964 cni.go:84] Creating CNI manager for ""
	I0315 05:58:00.605400   16964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:58:00.605408   16964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 05:58:00.605427   16964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-480837 NodeName:addons-480837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 05:58:00.605545   16964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-480837"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 05:58:00.605606   16964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 05:58:00.616645   16964 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 05:58:00.616716   16964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 05:58:00.626791   16964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 05:58:00.643801   16964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 05:58:00.660772   16964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0315 05:58:00.678870   16964 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I0315 05:58:00.683103   16964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 05:58:00.697380   16964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 05:58:00.820820   16964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 05:58:00.839555   16964 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837 for IP: 192.168.39.159
	I0315 05:58:00.839578   16964 certs.go:194] generating shared ca certs ...
	I0315 05:58:00.839598   16964 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:00.839743   16964 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 05:58:00.916097   16964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt ...
	I0315 05:58:00.916125   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt: {Name:mk2f70d28040143157995b318955c7c07e1733f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:00.916277   16964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key ...
	I0315 05:58:00.916289   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key: {Name:mk5fff95c6a965369b95aff329617ade07139d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:00.916360   16964 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 05:58:01.035240   16964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt ...
	I0315 05:58:01.035269   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt: {Name:mk99c6833c27a550c22ac05f9fd7a5d777e54c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.035414   16964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key ...
	I0315 05:58:01.035426   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key: {Name:mkb37a309b3671182b3fa4a7466aab1a0c973390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.035485   16964 certs.go:256] generating profile certs ...
	I0315 05:58:01.035539   16964 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.key
	I0315 05:58:01.035553   16964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt with IP's: []
	I0315 05:58:01.108884   16964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt ...
	I0315 05:58:01.108911   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: {Name:mka0f6c8c479d9d9e82c1a7d76f62ae71ccce11b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.109062   16964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.key ...
	I0315 05:58:01.109074   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.key: {Name:mk73ebfe5a5abc0d2102c6693f4c49c486f1a91e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.109138   16964 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key.3ff4ecbb
	I0315 05:58:01.109154   16964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt.3ff4ecbb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159]
	I0315 05:58:01.233815   16964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt.3ff4ecbb ...
	I0315 05:58:01.233845   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt.3ff4ecbb: {Name:mk8b7b660bf04112c169e32f3c3c6ead2c98c964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.233982   16964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key.3ff4ecbb ...
	I0315 05:58:01.233995   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key.3ff4ecbb: {Name:mk6b4c5d07edd3fd0254a831cdbf2d8785bff100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.234062   16964 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt.3ff4ecbb -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt
	I0315 05:58:01.234142   16964 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key.3ff4ecbb -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key
	I0315 05:58:01.234196   16964 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.key
	I0315 05:58:01.234219   16964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.crt with IP's: []
	I0315 05:58:01.301628   16964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.crt ...
	I0315 05:58:01.301659   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.crt: {Name:mkb9e4c93751af8ff07a26dd633622e4818cbdfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.301809   16964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.key ...
	I0315 05:58:01.301822   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.key: {Name:mk2735b831ce49c830410a4790364e917876b35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:01.301987   16964 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 05:58:01.302024   16964 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 05:58:01.302043   16964 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 05:58:01.302066   16964 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 05:58:01.302700   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 05:58:01.333882   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 05:58:01.359559   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 05:58:01.383885   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 05:58:01.408002   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0315 05:58:01.432357   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 05:58:01.457427   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 05:58:01.482588   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 05:58:01.507770   16964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 05:58:01.550495   16964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 05:58:01.571299   16964 ssh_runner.go:195] Run: openssl version
	I0315 05:58:01.579595   16964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 05:58:01.591940   16964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 05:58:01.598090   16964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 05:58:01.598149   16964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 05:58:01.604937   16964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 05:58:01.617220   16964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 05:58:01.621412   16964 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 05:58:01.621454   16964 kubeadm.go:391] StartCluster: {Name:addons-480837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-480837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 05:58:01.621517   16964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 05:58:01.621557   16964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 05:58:01.657882   16964 cri.go:89] found id: ""
	I0315 05:58:01.657945   16964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 05:58:01.669445   16964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 05:58:01.680261   16964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 05:58:01.690991   16964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 05:58:01.691011   16964 kubeadm.go:156] found existing configuration files:
	
	I0315 05:58:01.691050   16964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 05:58:01.701105   16964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 05:58:01.701156   16964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 05:58:01.711243   16964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 05:58:01.720942   16964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 05:58:01.720992   16964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 05:58:01.730947   16964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 05:58:01.740416   16964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 05:58:01.740478   16964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 05:58:01.750830   16964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 05:58:01.761078   16964 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 05:58:01.761141   16964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 05:58:01.771863   16964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 05:58:01.955322   16964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 05:58:12.742893   16964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 05:58:12.742967   16964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 05:58:12.743084   16964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 05:58:12.743176   16964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 05:58:12.743302   16964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 05:58:12.743390   16964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 05:58:12.745042   16964 out.go:204]   - Generating certificates and keys ...
	I0315 05:58:12.745107   16964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 05:58:12.745175   16964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 05:58:12.745272   16964 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 05:58:12.745361   16964 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 05:58:12.745450   16964 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 05:58:12.745521   16964 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 05:58:12.745602   16964 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 05:58:12.745756   16964 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-480837 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0315 05:58:12.745804   16964 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 05:58:12.745900   16964 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-480837 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0315 05:58:12.745968   16964 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 05:58:12.746039   16964 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 05:58:12.746077   16964 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 05:58:12.746128   16964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 05:58:12.746170   16964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 05:58:12.746221   16964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 05:58:12.746285   16964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 05:58:12.746358   16964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 05:58:12.746457   16964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 05:58:12.746550   16964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 05:58:12.747906   16964 out.go:204]   - Booting up control plane ...
	I0315 05:58:12.747978   16964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 05:58:12.748053   16964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 05:58:12.748118   16964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 05:58:12.748231   16964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 05:58:12.748329   16964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 05:58:12.748388   16964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 05:58:12.748567   16964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 05:58:12.748636   16964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004174 seconds
	I0315 05:58:12.748756   16964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 05:58:12.748874   16964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 05:58:12.748939   16964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 05:58:12.749146   16964 kubeadm.go:309] [mark-control-plane] Marking the node addons-480837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 05:58:12.749228   16964 kubeadm.go:309] [bootstrap-token] Using token: gmazko.xzn8ulgux3c7gcmr
	I0315 05:58:12.751021   16964 out.go:204]   - Configuring RBAC rules ...
	I0315 05:58:12.751138   16964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 05:58:12.751246   16964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 05:58:12.751396   16964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 05:58:12.751580   16964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 05:58:12.751756   16964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 05:58:12.751867   16964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 05:58:12.752032   16964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 05:58:12.752092   16964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 05:58:12.752146   16964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 05:58:12.752153   16964 kubeadm.go:309] 
	I0315 05:58:12.752228   16964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 05:58:12.752239   16964 kubeadm.go:309] 
	I0315 05:58:12.752331   16964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 05:58:12.752341   16964 kubeadm.go:309] 
	I0315 05:58:12.752380   16964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 05:58:12.752471   16964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 05:58:12.752547   16964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 05:58:12.752557   16964 kubeadm.go:309] 
	I0315 05:58:12.752633   16964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 05:58:12.752642   16964 kubeadm.go:309] 
	I0315 05:58:12.752723   16964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 05:58:12.752732   16964 kubeadm.go:309] 
	I0315 05:58:12.752798   16964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 05:58:12.752907   16964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 05:58:12.753009   16964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 05:58:12.753020   16964 kubeadm.go:309] 
	I0315 05:58:12.753136   16964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 05:58:12.753225   16964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 05:58:12.753243   16964 kubeadm.go:309] 
	I0315 05:58:12.753376   16964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gmazko.xzn8ulgux3c7gcmr \
	I0315 05:58:12.753535   16964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 05:58:12.753581   16964 kubeadm.go:309] 	--control-plane 
	I0315 05:58:12.753592   16964 kubeadm.go:309] 
	I0315 05:58:12.753706   16964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 05:58:12.753718   16964 kubeadm.go:309] 
	I0315 05:58:12.753830   16964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gmazko.xzn8ulgux3c7gcmr \
	I0315 05:58:12.753976   16964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
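	[editor's note] The kubeadm join commands above embed a discovery token CA certificate hash. That value is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info; the short Go sketch below shows how such a hash is recomputed for verification. The certificate path is an assumption for illustration, not a value taken from this log.

	// Sketch: recompute a kubeadm discovery-token-ca-cert-hash (sha256:<hex>)
	// as the SHA-256 of the CA certificate's Subject Public Key Info.
	// The certificate path below is an assumed example.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in CA certificate")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}

	A joining node compares this digest against the one passed on the command line before trusting the control plane's served CA bundle.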
	I0315 05:58:12.753991   16964 cni.go:84] Creating CNI manager for ""
	I0315 05:58:12.754001   16964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:58:12.755556   16964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 05:58:12.756718   16964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 05:58:12.796040   16964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
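	[editor's note] The two lines above create /etc/cni/net.d and copy a 457-byte bridge conflist into it. The log does not include the file's contents; the sketch below writes a minimal, hypothetical bridge CNI configuration of the same general shape. The plugin list and the 10.244.0.0/16 subnet are illustrative assumptions, not the actual payload from this run.

	// Minimal sketch: write a hypothetical bridge CNI conflist similar in role
	// to the 1-k8s.conflist copied above. Contents are illustrative only.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		dir := "/etc/cni/net.d" // same directory targeted by the "sudo mkdir -p" above
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		// Filename matches the log; the JSON body is a sketch, not the real 457 bytes.
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}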
	I0315 05:58:12.852246   16964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 05:58:12.852304   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:12.852337   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-480837 minikube.k8s.io/updated_at=2024_03_15T05_58_12_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=addons-480837 minikube.k8s.io/primary=true
	I0315 05:58:12.874885   16964 ops.go:34] apiserver oom_adj: -16
	I0315 05:58:12.971166   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:13.471956   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:13.972129   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:14.471267   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:14.971223   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:15.471909   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:15.972251   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:16.471854   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:16.972155   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:17.471234   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:17.971704   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:18.471381   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:18.971676   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:19.472131   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:19.971954   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:20.471572   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:20.971213   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:21.472059   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:21.972165   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:22.472131   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:22.971449   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:23.472265   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:23.971790   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:24.471808   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:24.971532   16964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 05:58:25.114648   16964 kubeadm.go:1107] duration metric: took 12.262397757s to wait for elevateKubeSystemPrivileges
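	[editor's note] The block of repeated "kubectl get sa default" calls above is a poll-until-ready loop: the command is retried roughly every half second until the default service account exists, after which the 12.26s wait is recorded. Below is a minimal sketch of that wait-and-retry pattern, assuming the binary paths shown in the log and a two-minute timeout; it is not minikube's actual implementation.

	// Sketch of the retry pattern visible above: poll `kubectl get sa default`
	// about every 500ms until it succeeds or a timeout (assumed) expires.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account exists; default RBAC objects are in place
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute, // assumed timeout for illustration
		)
		if err != nil {
			fmt.Println(err)
		}
	}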
	W0315 05:58:25.114694   16964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 05:58:25.114705   16964 kubeadm.go:393] duration metric: took 23.493254034s to StartCluster
	I0315 05:58:25.114726   16964 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:25.114864   16964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 05:58:25.115451   16964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:58:25.115707   16964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 05:58:25.115721   16964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 05:58:25.117742   16964 out.go:177] * Verifying Kubernetes components...
	I0315 05:58:25.115780   16964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0315 05:58:25.115920   16964 config.go:182] Loaded profile config "addons-480837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 05:58:25.119380   16964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 05:58:25.119404   16964 addons.go:69] Setting yakd=true in profile "addons-480837"
	I0315 05:58:25.119429   16964 addons.go:69] Setting registry=true in profile "addons-480837"
	I0315 05:58:25.119433   16964 addons.go:69] Setting ingress-dns=true in profile "addons-480837"
	I0315 05:58:25.119452   16964 addons.go:234] Setting addon yakd=true in "addons-480837"
	I0315 05:58:25.119459   16964 addons.go:234] Setting addon registry=true in "addons-480837"
	I0315 05:58:25.119467   16964 addons.go:69] Setting default-storageclass=true in profile "addons-480837"
	I0315 05:58:25.119479   16964 addons.go:69] Setting storage-provisioner=true in profile "addons-480837"
	I0315 05:58:25.119493   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119500   16964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-480837"
	I0315 05:58:25.119509   16964 addons.go:69] Setting metrics-server=true in profile "addons-480837"
	I0315 05:58:25.119517   16964 addons.go:69] Setting gcp-auth=true in profile "addons-480837"
	I0315 05:58:25.119522   16964 addons.go:234] Setting addon storage-provisioner=true in "addons-480837"
	I0315 05:58:25.119531   16964 addons.go:234] Setting addon metrics-server=true in "addons-480837"
	I0315 05:58:25.119534   16964 mustload.go:65] Loading cluster: addons-480837
	I0315 05:58:25.119549   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119562   16964 addons.go:69] Setting helm-tiller=true in profile "addons-480837"
	I0315 05:58:25.119596   16964 addons.go:234] Setting addon helm-tiller=true in "addons-480837"
	I0315 05:58:25.119623   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119780   16964 config.go:182] Loaded profile config "addons-480837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 05:58:25.119887   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.119915   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.119969   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.119990   16964 addons.go:69] Setting ingress=true in profile "addons-480837"
	I0315 05:58:25.120085   16964 addons.go:234] Setting addon ingress=true in "addons-480837"
	I0315 05:58:25.119460   16964 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-480837"
	I0315 05:58:25.120175   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.120186   16964 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-480837"
	I0315 05:58:25.120192   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120216   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.119493   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119486   16964 addons.go:234] Setting addon ingress-dns=true in "addons-480837"
	I0315 05:58:25.120390   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.120576   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120593   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.119550   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.120646   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120723   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.120760   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120787   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120790   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.120812   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.120912   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.120949   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.119968   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.121050   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.119458   16964 addons.go:69] Setting volumesnapshots=true in profile "addons-480837"
	I0315 05:58:25.121126   16964 addons.go:234] Setting addon volumesnapshots=true in "addons-480837"
	I0315 05:58:25.121158   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119498   16964 addons.go:69] Setting inspektor-gadget=true in profile "addons-480837"
	I0315 05:58:25.121223   16964 addons.go:234] Setting addon inspektor-gadget=true in "addons-480837"
	I0315 05:58:25.121264   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.119980   16964 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-480837"
	I0315 05:58:25.119990   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.119999   16964 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-480837"
	I0315 05:58:25.120003   16964 addons.go:69] Setting cloud-spanner=true in profile "addons-480837"
	I0315 05:58:25.120043   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.121544   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.121563   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.121933   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.124385   16964 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-480837"
	I0315 05:58:25.124430   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.124516   16964 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-480837"
	I0315 05:58:25.124559   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.124837   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.124877   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.124886   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.124924   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.135072   16964 addons.go:234] Setting addon cloud-spanner=true in "addons-480837"
	I0315 05:58:25.135131   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.135528   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.135564   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.140988   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0315 05:58:25.141400   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.141907   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.141934   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.141980   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0315 05:58:25.142239   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0315 05:58:25.142395   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.142400   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.142548   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.143110   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.143554   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.143571   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.143933   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.144181   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0315 05:58:25.144415   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.144568   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.144621   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.144832   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.144968   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.145020   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.145428   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.145450   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.145723   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0315 05:58:25.145749   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.145762   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.146244   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.146467   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.146739   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.147239   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.147255   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.147601   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.148002   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.148035   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.148135   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.148162   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.149097   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.149122   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.150633   16964 addons.go:234] Setting addon default-storageclass=true in "addons-480837"
	I0315 05:58:25.150687   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.151025   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.151080   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.152147   16964 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-480837"
	I0315 05:58:25.152197   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:25.152555   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.152601   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.171371   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
	I0315 05:58:25.172086   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.172877   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0315 05:58:25.173318   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.173552   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.173568   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.173797   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.173815   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.174156   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.174209   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.174711   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.174737   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.174974   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0315 05:58:25.175652   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.175688   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.181988   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0315 05:58:25.182695   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.184377   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0315 05:58:25.184551   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I0315 05:58:25.184823   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.185643   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.185661   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.185887   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.186033   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.186504   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.186520   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.186650   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.186659   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.187133   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.187732   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.187768   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.187779   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.187966   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.188195   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.188242   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.188580   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.188595   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.189206   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.189246   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.189471   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0315 05:58:25.189928   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.190539   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.190685   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.190696   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.191073   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.191619   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.191640   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.192355   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.192386   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.193544   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0315 05:58:25.194630   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0315 05:58:25.194851   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0315 05:58:25.195021   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.195540   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.195557   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.195626   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.195977   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.196993   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.197019   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.197702   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.197726   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.198061   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.198144   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.198598   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.198614   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.198919   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.199065   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.199123   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.203406   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.205972   16964 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0315 05:58:25.204386   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0315 05:58:25.204678   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I0315 05:58:25.206490   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0315 05:58:25.209252   16964 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 05:58:25.208182   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.208638   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.209726   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
	I0315 05:58:25.210023   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.211858   16964 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 05:58:25.213305   16964 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 05:58:25.213328   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0315 05:58:25.213346   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.211077   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.213406   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.211185   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.213460   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.211218   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.211449   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.213546   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.213872   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.214026   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.214258   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.214751   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.214788   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.214980   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.215513   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.215538   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.215598   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.215611   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.216384   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.216397   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.218041   16964 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0315 05:58:25.217196   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.218091   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.219092   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.219484   16964 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 05:58:25.219498   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0315 05:58:25.219532   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.219550   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.219570   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.220126   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.220309   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.220492   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.220648   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.224835   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.225036   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I0315 05:58:25.225375   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0315 05:58:25.225514   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0315 05:58:25.225669   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.225838   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.225928   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.226112   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.226131   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.226247   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.226264   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.226300   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.226442   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.226453   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.226483   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.226503   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.226615   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.226668   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.226744   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.226795   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.227147   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.227165   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.227479   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.227505   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:25.227614   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.227626   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:25.228493   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.230616   16964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 05:58:25.229246   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.232113   16964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 05:58:25.232132   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 05:58:25.232149   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.234735   16964 out.go:177]   - Using image docker.io/registry:2.8.3
	I0315 05:58:25.236231   16964 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0315 05:58:25.237848   16964 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0315 05:58:25.237865   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0315 05:58:25.237884   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.235441   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0315 05:58:25.237008   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.238048   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.238075   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.238734   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.238946   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.239105   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.239353   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.240332   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.241194   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.241219   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.241784   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.242510   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.242933   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.242951   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.242979   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0315 05:58:25.243112   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.243589   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.244089   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.244107   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.244519   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.244701   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.244913   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0315 05:58:25.245377   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.245416   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.245446   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37941
	I0315 05:58:25.245582   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.245780   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.245998   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.246014   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.246065   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.246351   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.246428   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.246615   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.246803   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.246815   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.246877   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.248791   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0315 05:58:25.247326   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.247731   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.248234   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.251432   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0315 05:58:25.250545   16964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 05:58:25.250567   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.252632   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0315 05:58:25.252764   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 05:58:25.252774   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0315 05:58:25.255749   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0315 05:58:25.255779   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0315 05:58:25.255797   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.255816   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I0315 05:58:25.254586   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0315 05:58:25.254611   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.254685   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.256343   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.259392   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0315 05:58:25.260202   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42443
	I0315 05:58:25.260246   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.260255   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.261192   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.260320   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0315 05:58:25.261230   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.262451   16964 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0315 05:58:25.260834   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.261075   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0315 05:58:25.261476   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.261894   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.264119   16964 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0315 05:58:25.264141   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0315 05:58:25.264158   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.262557   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.261940   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.262744   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.262784   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0315 05:58:25.263037   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.264404   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.264885   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.265951   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0315 05:58:25.267758   16964 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0315 05:58:25.266155   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.266192   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.266269   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.267875   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.266783   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0315 05:58:25.267924   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.266816   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.266849   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.267998   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.266859   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.268048   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.267396   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.268275   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.269892   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0315 05:58:25.269913   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0315 05:58:25.268347   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.269928   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.269952   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.268799   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.268767   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.268805   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.269132   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.269984   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.269992   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.269386   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.269500   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.270039   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.270755   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.270764   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.270816   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.270824   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.270876   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
	I0315 05:58:25.270969   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.271016   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.271023   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.271194   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.271315   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:25.271624   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.271885   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:25.271905   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:25.272314   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.274101   16964 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0315 05:58:25.272646   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:25.273218   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.273817   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.273977   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.275465   16964 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0315 05:58:25.275482   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0315 05:58:25.275499   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.274942   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.275549   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.275571   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.275617   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.276920   16964 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0315 05:58:25.275251   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:25.274958   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.275738   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.278296   16964 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0315 05:58:25.278458   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0315 05:58:25.278482   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.278328   16964 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0315 05:58:25.279986   16964 out.go:177]   - Using image docker.io/busybox:stable
	I0315 05:58:25.278358   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.278369   16964 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0315 05:58:25.278746   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.278949   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.280056   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.280255   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.280356   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:25.281222   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.281434   16964 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0315 05:58:25.281464   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.281612   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.281890   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.282725   16964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 05:58:25.282738   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0315 05:58:25.282752   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.282781   16964 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0315 05:58:25.282792   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0315 05:58:25.282806   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.282820   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.282842   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.284440   16964 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 05:58:25.284450   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0315 05:58:25.282851   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.284523   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.286359   16964 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0315 05:58:25.287589   16964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 05:58:25.287605   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 05:58:25.287621   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:25.286578   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.287658   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.285276   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.286018   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.283460   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.286820   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.287709   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.287334   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.287727   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.287745   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.287856   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.287916   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.287971   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.288191   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.288195   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.288320   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.288366   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.288625   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:25.288896   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.289238   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.289263   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.289424   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.289609   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.289764   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.289879   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	W0315 05:58:25.290599   16964 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47990->192.168.39.159:22: read: connection reset by peer
	I0315 05:58:25.290629   16964 retry.go:31] will retry after 233.649086ms: ssh: handshake failed: read tcp 192.168.39.1:47990->192.168.39.159:22: read: connection reset by peer
	I0315 05:58:25.291041   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.291439   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:25.291473   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:25.291604   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:25.291793   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:25.291934   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:25.292051   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	W0315 05:58:25.525448   16964 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48002->192.168.39.159:22: read: connection reset by peer
	I0315 05:58:25.525474   16964 retry.go:31] will retry after 396.494719ms: ssh: handshake failed: read tcp 192.168.39.1:48002->192.168.39.159:22: read: connection reset by peer
	I0315 05:58:25.701083   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 05:58:25.718669   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0315 05:58:25.718701   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0315 05:58:25.722135   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 05:58:25.740859   16964 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0315 05:58:25.740887   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0315 05:58:25.815671   16964 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0315 05:58:25.815698   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0315 05:58:25.823494   16964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 05:58:25.823515   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0315 05:58:25.846832   16964 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0315 05:58:25.846864   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0315 05:58:25.884535   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 05:58:25.910810   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 05:58:25.911810   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 05:58:25.915694   16964 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 05:58:25.915721   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0315 05:58:25.916085   16964 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0315 05:58:25.916110   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0315 05:58:25.924511   16964 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0315 05:58:25.924541   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0315 05:58:25.954358   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0315 05:58:25.954380   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0315 05:58:25.955118   16964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 05:58:25.955138   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 05:58:25.991900   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0315 05:58:26.040855   16964 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0315 05:58:26.040885   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0315 05:58:26.051560   16964 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0315 05:58:26.051581   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0315 05:58:26.144758   16964 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0315 05:58:26.144781   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0315 05:58:26.159350   16964 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0315 05:58:26.159377   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0315 05:58:26.195034   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0315 05:58:26.195064   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0315 05:58:26.197516   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 05:58:26.201819   16964 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.086084359s)
	I0315 05:58:26.201895   16964 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.08248537s)
	I0315 05:58:26.201956   16964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 05:58:26.201979   16964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
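	[editor's note] The command above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address (192.168.39.1 in this run). Reconstructed from the sed expression only (a sketch, not the full Corefile), the injected stanza is placed just before the forward block, and a "log" directive is added before "errors":

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}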
	I0315 05:58:26.207853   16964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 05:58:26.207874   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 05:58:26.324391   16964 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0315 05:58:26.324418   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0315 05:58:26.332847   16964 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0315 05:58:26.332866   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0315 05:58:26.403931   16964 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0315 05:58:26.403953   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0315 05:58:26.449291   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0315 05:58:26.524146   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 05:58:26.531840   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0315 05:58:26.531863   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0315 05:58:26.636732   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 05:58:26.717125   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0315 05:58:26.717150   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0315 05:58:26.740227   16964 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0315 05:58:26.740250   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0315 05:58:26.742342   16964 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0315 05:58:26.742363   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0315 05:58:26.886322   16964 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0315 05:58:26.886346   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0315 05:58:27.005168   16964 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 05:58:27.005189   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0315 05:58:27.062272   16964 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0315 05:58:27.062306   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0315 05:58:27.157353   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0315 05:58:27.347184   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0315 05:58:27.347209   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0315 05:58:27.347772   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 05:58:27.442305   16964 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0315 05:58:27.442327   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0315 05:58:27.531510   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0315 05:58:27.531534   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0315 05:58:27.692002   16964 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 05:58:27.692030   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0315 05:58:27.696045   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0315 05:58:27.696068   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0315 05:58:27.779921   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0315 05:58:27.779949   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0315 05:58:27.898443   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 05:58:28.052218   16964 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 05:58:28.052247   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0315 05:58:28.270315   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 05:58:31.154590   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.453472641s)
	I0315 05:58:31.154647   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:31.154662   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:31.154947   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:31.154966   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:31.154975   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:31.154983   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:31.155234   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:31.155251   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:31.155259   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:31.914070   16964 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0315 05:58:31.914110   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:31.917010   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:31.917379   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:31.917411   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:31.917616   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:31.917795   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:31.917970   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:31.918130   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:32.337145   16964 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0315 05:58:32.411949   16964 addons.go:234] Setting addon gcp-auth=true in "addons-480837"
	I0315 05:58:32.411997   16964 host.go:66] Checking if "addons-480837" exists ...
	I0315 05:58:32.412299   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:32.412324   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:32.441014   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0315 05:58:32.441443   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:32.441950   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:32.441976   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:32.442296   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:32.442784   16964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 05:58:32.442811   16964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 05:58:32.458890   16964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0315 05:58:32.459383   16964 main.go:141] libmachine: () Calling .GetVersion
	I0315 05:58:32.460323   16964 main.go:141] libmachine: Using API Version  1
	I0315 05:58:32.460345   16964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 05:58:32.460700   16964 main.go:141] libmachine: () Calling .GetMachineName
	I0315 05:58:32.460893   16964 main.go:141] libmachine: (addons-480837) Calling .GetState
	I0315 05:58:32.462623   16964 main.go:141] libmachine: (addons-480837) Calling .DriverName
	I0315 05:58:32.462863   16964 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0315 05:58:32.462886   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHHostname
	I0315 05:58:32.465910   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:32.466399   16964 main.go:141] libmachine: (addons-480837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:1d:7e", ip: ""} in network mk-addons-480837: {Iface:virbr1 ExpiryTime:2024-03-15 06:57:43 +0000 UTC Type:0 Mac:52:54:00:9e:1d:7e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:addons-480837 Clientid:01:52:54:00:9e:1d:7e}
	I0315 05:58:32.466430   16964 main.go:141] libmachine: (addons-480837) DBG | domain addons-480837 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:1d:7e in network mk-addons-480837
	I0315 05:58:32.466610   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHPort
	I0315 05:58:32.466794   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHKeyPath
	I0315 05:58:32.466964   16964 main.go:141] libmachine: (addons-480837) Calling .GetSSHUsername
	I0315 05:58:32.467119   16964 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/addons-480837/id_rsa Username:docker}
	I0315 05:58:34.853072   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.968497944s)
	I0315 05:58:34.853151   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.130983557s)
	I0315 05:58:34.853174   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853182   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853188   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853202   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853240   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.861312557s)
	I0315 05:58:34.853275   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853283   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853198   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.941367284s)
	I0315 05:58:34.853310   16964 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.65131632s)
	I0315 05:58:34.853320   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853325   16964 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0315 05:58:34.853335   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853338   16964 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.651366228s)
	I0315 05:58:34.853151   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.942312252s)
	I0315 05:58:34.853674   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.404350431s)
	I0315 05:58:34.853698   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853709   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853677   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853753   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853753   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.329572988s)
	I0315 05:58:34.853285   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.655742363s)
	I0315 05:58:34.853787   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853794   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853797   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.217041189s)
	I0315 05:58:34.853810   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853819   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853772   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853831   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.853899   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.696518437s)
	I0315 05:58:34.853915   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.853922   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.854058   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.506257894s)
	W0315 05:58:34.854084   16964 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0315 05:58:34.854127   16964 retry.go:31] will retry after 227.510295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
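	[editor's note] This failure is about ordering, not content: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, and it is applied in the same batch as the CRDs that introduce that kind, so the API server has not established the new types yet ("no matches for kind ... ensure CRDs are installed first"). The log shows minikube simply retrying, and later re-applying with --force at 05:58:35.081768 below. A manual equivalent, assuming the CRD names printed in the stderr above, would be to wait for the CRDs to be established before applying the dependent object:

		kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml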
	I0315 05:58:34.854208   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.955722152s)
	I0315 05:58:34.854224   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.854232   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.854311   16964 node_ready.go:35] waiting up to 6m0s for node "addons-480837" to be "Ready" ...
	I0315 05:58:34.857270   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857270   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857286   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857294   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857291   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857313   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857315   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857323   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857325   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857332   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857335   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857342   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857344   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857350   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857353   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857359   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857380   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857343   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857392   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857400   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857410   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857414   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857421   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857434   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857442   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857449   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857456   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857462   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857403   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857473   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857495   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857504   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857511   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857512   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857525   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857526   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857537   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857545   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857546   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857552   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857555   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857561   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857567   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857610   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857622   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857629   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857645   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857688   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857712   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857726   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857740   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857752   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857768   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.857800   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857895   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.857918   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.857966   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.857992   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.858015   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.857324   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.858074   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.858091   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.858125   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.858302   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.858169   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.858190   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.858432   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.858201   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.858769   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.858282   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.858895   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.859296   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.859311   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.859331   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.859336   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.859339   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.859343   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.859352   16964 addons.go:470] Verifying addon ingress=true in "addons-480837"
	I0315 05:58:34.862793   16964 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-480837 service yakd-dashboard -n yakd-dashboard
	
	I0315 05:58:34.859653   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.859661   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.859669   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.859692   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.860645   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.860672   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.864322   16964 out.go:177] * Verifying ingress addon...
	I0315 05:58:34.864341   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.866042   16964 addons.go:470] Verifying addon metrics-server=true in "addons-480837"
	I0315 05:58:34.864351   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.864355   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:34.866082   16964 addons.go:470] Verifying addon registry=true in "addons-480837"
	I0315 05:58:34.867672   16964 out.go:177] * Verifying registry addon...
	I0315 05:58:34.866698   16964 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0315 05:58:34.869629   16964 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0315 05:58:34.881265   16964 node_ready.go:49] node "addons-480837" has status "Ready":"True"
	I0315 05:58:34.881290   16964 node_ready.go:38] duration metric: took 26.96454ms for node "addons-480837" to be "Ready" ...
	I0315 05:58:34.881301   16964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 05:58:34.913498   16964 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0315 05:58:34.913531   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:34.952681   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.952705   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.953011   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.953030   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	W0315 05:58:34.953122   16964 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
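	[editor's note] The default-storageclass warning is an optimistic-concurrency conflict: the local-path StorageClass was modified between read and update, so the stale write is rejected ("the object has been modified"). Re-running the update against the latest object version normally succeeds. A hypothetical manual equivalent (the addon does this through the API, and "standard" as the intended default class name is an assumption, not taken from this log) would be:

		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
		kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'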
	I0315 05:58:34.953471   16964 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0315 05:58:34.953491   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:34.956025   16964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace to be "Ready" ...
	I0315 05:58:34.983437   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:34.983455   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:34.983761   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:34.983823   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:34.983840   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:35.081768   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 05:58:35.362574   16964 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-480837" context rescaled to 1 replicas
	I0315 05:58:35.378386   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:35.378776   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:36.009230   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:36.013485   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:36.440506   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:36.440797   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:36.446928   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.176556951s)
	I0315 05:58:36.446953   16964 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.984072379s)
	I0315 05:58:36.446978   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:36.446990   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:36.449346   16964 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 05:58:36.447252   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:36.447277   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:36.449396   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:36.451141   16964 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0315 05:58:36.452579   16964 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0315 05:58:36.452599   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0315 05:58:36.451161   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:36.452633   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:36.452962   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:36.453008   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:36.453027   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:36.453038   16964 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-480837"
	I0315 05:58:36.454796   16964 out.go:177] * Verifying csi-hostpath-driver addon...
	I0315 05:58:36.456924   16964 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0315 05:58:36.503057   16964 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0315 05:58:36.503077   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0315 05:58:36.545857   16964 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 05:58:36.545876   16964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0315 05:58:36.551749   16964 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0315 05:58:36.551767   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:36.589262   16964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 05:58:36.924247   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:36.924428   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:36.982113   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:36.987401   16964 pod_ready.go:102] pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:37.383574   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:37.383775   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:37.478530   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:37.872913   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:37.879304   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:37.963033   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:38.286930   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.205114342s)
	I0315 05:58:38.286997   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:38.287014   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:38.287289   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:38.287309   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:38.287320   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:38.287329   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:38.287563   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:38.287605   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:38.287616   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:38.372555   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:38.382459   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:38.475111   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:38.785939   16964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.196631762s)
	I0315 05:58:38.786005   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:38.786032   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:38.786355   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:38.786377   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:38.786383   16964 main.go:141] libmachine: (addons-480837) DBG | Closing plugin on server side
	I0315 05:58:38.786387   16964 main.go:141] libmachine: Making call to close driver server
	I0315 05:58:38.786500   16964 main.go:141] libmachine: (addons-480837) Calling .Close
	I0315 05:58:38.786813   16964 main.go:141] libmachine: Successfully made call to close driver server
	I0315 05:58:38.786829   16964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 05:58:38.787609   16964 addons.go:470] Verifying addon gcp-auth=true in "addons-480837"
	I0315 05:58:38.789389   16964 out.go:177] * Verifying gcp-auth addon...
	I0315 05:58:38.791375   16964 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0315 05:58:38.794843   16964 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0315 05:58:38.794865   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:38.889299   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:38.890369   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:38.967029   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:39.295863   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:39.373000   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:39.376979   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:39.463430   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:39.471392   16964 pod_ready.go:102] pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:39.795848   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:39.875037   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:39.875367   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:39.964920   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:40.295363   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:40.372491   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:40.375957   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:40.462703   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:40.796341   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:40.871617   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:40.874630   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:40.963300   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:41.296561   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:41.372914   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:41.376068   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:41.463232   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:41.798728   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:41.875392   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:41.875700   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:41.972219   16964 pod_ready.go:102] pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:41.972348   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:42.296596   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:42.373799   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:42.377942   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:42.744800   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:42.796080   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:42.872643   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:42.875718   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:42.962575   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:42.965757   16964 pod_ready.go:97] pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.159 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-15 05:58:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 05:58:27 +0000 UTC,FinishedAt:2024-03-15 05:58:40 +0000 UTC,ContainerID:cri-o://65478f8cc5b861b6164b26b83e4cb6453e334cf1bed852b4177d0f6d4d5f1bec,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://65478f8cc5b861b6164b26b83e4cb6453e334cf1bed852b4177d0f6d4d5f1bec Started:0xc002f82470 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 05:58:42.965783   16964 pod_ready.go:81] duration metric: took 8.009737609s for pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace to be "Ready" ...
	E0315 05:58:42.965795   16964 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-lx9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:25 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 05:58:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.159 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-15 05:58:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 05:58:27 +0000 UTC,FinishedAt:2024-03-15 05:58:40 +0000 UTC,ContainerID:cri-o://65478f8cc5b861b6164b26b83e4cb6453e334cf1bed852b4177d0f6d4d5f1bec,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://65478f8cc5b861b6164b26b83e4cb6453e334cf1bed852b4177d0f6d4d5f1bec Started:0xc002f82470 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 05:58:42.965801   16964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace to be "Ready" ...
	I0315 05:58:43.296392   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:43.372174   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:43.375025   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:43.462787   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:43.796127   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:43.872500   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:43.874879   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:43.963298   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:44.296351   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:44.374084   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:44.375752   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:44.463372   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:44.796700   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:44.873552   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:44.876408   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:44.962175   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:44.971939   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:45.295797   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:45.372383   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:45.374939   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:45.471863   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:45.796241   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:45.872914   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:45.876583   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:45.962308   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:46.296030   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:46.372848   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:46.375298   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:46.465419   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:46.796401   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:46.873143   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:46.876777   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:46.964382   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:46.972285   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:47.296256   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:47.372566   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:47.376574   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:47.463153   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:47.795505   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:47.875495   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:47.875742   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:47.963289   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:48.295703   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:48.374582   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:48.375355   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:48.462259   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:48.796407   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:48.872717   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:48.875265   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:48.963423   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:49.298327   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:49.372257   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:49.375432   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:49.463207   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:49.481297   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:49.796382   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:49.873005   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:49.875460   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:49.964006   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:50.295792   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:50.373124   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:50.375609   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:50.464546   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:50.795989   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:50.874021   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:50.875834   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:50.962935   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:51.307450   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:51.373129   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:51.379659   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:51.475483   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:51.498448   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:51.795010   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:51.874964   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:51.875784   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:51.962875   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:52.295194   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:52.372863   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:52.375891   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:52.463227   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:52.796897   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:52.873707   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:52.875709   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:52.963317   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:53.296265   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:53.377664   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:53.379695   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:53.466416   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:53.796284   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:53.874756   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:53.875886   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:53.962479   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:53.974621   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:54.295977   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:54.372944   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:54.374822   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:54.465021   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:54.795109   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:54.872574   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:54.874692   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:54.963713   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:55.294979   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:55.375559   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:55.376225   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:55.462435   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:55.796911   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:55.873538   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:55.874885   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:55.963811   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:56.295842   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:56.373667   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:56.375117   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:56.462959   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:56.472142   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:56.795896   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:56.880310   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:56.880589   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:56.962814   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:57.294980   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:57.374046   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:57.376532   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:57.462526   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:57.794988   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:57.872392   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:57.879067   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:57.962725   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:58.295371   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:58.383193   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:58.383317   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:58.462983   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:58.795479   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:58.872869   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:58.876219   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:58.962587   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:58.972536   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:58:59.295205   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:59.868439   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:58:59.869786   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:59.873014   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:59.873609   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:58:59.883668   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:58:59.883911   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:58:59.963088   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:00.295687   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:00.373397   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:00.377028   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:00.463476   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:00.795932   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:00.872308   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:00.881697   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:00.963667   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:01.296053   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:01.372220   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:01.374483   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:01.462472   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:01.472657   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:59:01.795818   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:01.874511   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:01.875778   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:01.961779   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:02.294993   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:02.374365   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:02.381888   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:02.462623   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:02.796837   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:02.873299   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:02.874532   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:02.963679   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:03.295127   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:03.371727   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:03.374695   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:03.463841   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:03.476099   16964 pod_ready.go:102] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"False"
	I0315 05:59:03.796944   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:03.874435   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:03.875447   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:03.962500   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:04.296011   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:04.372609   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:04.375756   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:04.464012   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:04.795279   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:04.873472   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:04.875360   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:04.961752   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:05.295608   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:05.399908   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:05.403693   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:05.465186   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:05.472191   16964 pod_ready.go:92] pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.472212   16964 pod_ready.go:81] duration metric: took 22.506403756s for pod "coredns-5dd5756b68-qkwrq" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.472221   16964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.478523   16964 pod_ready.go:92] pod "etcd-addons-480837" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.478541   16964 pod_ready.go:81] duration metric: took 6.314827ms for pod "etcd-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.478549   16964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.483142   16964 pod_ready.go:92] pod "kube-apiserver-addons-480837" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.483159   16964 pod_ready.go:81] duration metric: took 4.603776ms for pod "kube-apiserver-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.483166   16964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.487669   16964 pod_ready.go:92] pod "kube-controller-manager-addons-480837" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.487685   16964 pod_ready.go:81] duration metric: took 4.513428ms for pod "kube-controller-manager-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.487695   16964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wdw4w" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.492156   16964 pod_ready.go:92] pod "kube-proxy-wdw4w" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.492173   16964 pod_ready.go:81] duration metric: took 4.472542ms for pod "kube-proxy-wdw4w" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.492180   16964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.794912   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:05.870619   16964 pod_ready.go:92] pod "kube-scheduler-addons-480837" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:05.870645   16964 pod_ready.go:81] duration metric: took 378.457785ms for pod "kube-scheduler-addons-480837" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.870659   16964 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-zb4x6" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:05.873622   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:05.875504   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:05.974049   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:06.270395   16964 pod_ready.go:92] pod "metrics-server-69cf46c98-zb4x6" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:06.270420   16964 pod_ready.go:81] duration metric: took 399.753087ms for pod "metrics-server-69cf46c98-zb4x6" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:06.270433   16964 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bkftz" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:06.296804   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:06.373639   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:06.375236   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:06.463314   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:06.670514   16964 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bkftz" in "kube-system" namespace has status "Ready":"True"
	I0315 05:59:06.670538   16964 pod_ready.go:81] duration metric: took 400.097077ms for pod "nvidia-device-plugin-daemonset-bkftz" in "kube-system" namespace to be "Ready" ...
	I0315 05:59:06.670555   16964 pod_ready.go:38] duration metric: took 31.789242132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
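
The "Ready" waits logged above amount to polling each pod's Ready condition until it is True or the budget runs out. The following is a minimal standalone sketch of that pattern in Go using client-go; it is not minikube's pod_ready.go. The kubeconfig path, namespace, pod name, and 6m timeout are illustrative assumptions taken from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the pod named in the log above until Ready or the 6m budget is spent.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-qkwrq", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
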
	I0315 05:59:06.670642   16964 api_server.go:52] waiting for apiserver process to appear ...
	I0315 05:59:06.670713   16964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 05:59:06.707714   16964 api_server.go:72] duration metric: took 41.591953064s to wait for apiserver process to appear ...
	I0315 05:59:06.707743   16964 api_server.go:88] waiting for apiserver healthz status ...
	I0315 05:59:06.707765   16964 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0315 05:59:06.717846   16964 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0315 05:59:06.722608   16964 api_server.go:141] control plane version: v1.28.4
	I0315 05:59:06.722637   16964 api_server.go:131] duration metric: took 14.886682ms to wait for apiserver health ...
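
The healthz probe above is an HTTP GET against the apiserver's /healthz endpoint, retried until it returns 200. A minimal sketch of that loop follows; it is not minikube's api_server.go, and the InsecureSkipVerify transport is an assumption purely to keep the example self-contained (a real check would present the cluster CA and client certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification only for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log line above.
	if err := waitForHealthz("https://192.168.39.159:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
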
	I0315 05:59:06.722648   16964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 05:59:06.799921   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:06.877643   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:06.877822   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:06.881399   16964 system_pods.go:59] 18 kube-system pods found
	I0315 05:59:06.881427   16964 system_pods.go:61] "coredns-5dd5756b68-qkwrq" [1b6307d6-d4a4-4738-a3bb-123259d724cb] Running
	I0315 05:59:06.881438   16964 system_pods.go:61] "csi-hostpath-attacher-0" [40de6147-26c5-48a0-9fbb-a871c5358297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 05:59:06.881448   16964 system_pods.go:61] "csi-hostpath-resizer-0" [327a7a02-4ea0-42c7-8fe1-250a3f45170d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 05:59:06.881464   16964 system_pods.go:61] "csi-hostpathplugin-5swzw" [b2adac73-7cd0-4584-9f3f-671f369cb8e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 05:59:06.881474   16964 system_pods.go:61] "etcd-addons-480837" [ccc2092c-6839-4377-b7a3-05f197510db4] Running
	I0315 05:59:06.881488   16964 system_pods.go:61] "kube-apiserver-addons-480837" [8cb961de-df16-40c6-aae3-1eb79ae32476] Running
	I0315 05:59:06.881495   16964 system_pods.go:61] "kube-controller-manager-addons-480837" [7dda84a0-1301-4486-a6be-0ff5e689f786] Running
	I0315 05:59:06.881503   16964 system_pods.go:61] "kube-ingress-dns-minikube" [4c71f38e-185d-45d5-8176-d657c024205c] Running
	I0315 05:59:06.881508   16964 system_pods.go:61] "kube-proxy-wdw4w" [b793e4ee-9cd9-48d1-a6aa-e04fa427dc31] Running
	I0315 05:59:06.881514   16964 system_pods.go:61] "kube-scheduler-addons-480837" [30918671-32ca-4bdb-ae99-347c1e65cc92] Running
	I0315 05:59:06.881520   16964 system_pods.go:61] "metrics-server-69cf46c98-zb4x6" [79966cb5-86ce-4eae-9118-53d41994e123] Running
	I0315 05:59:06.881525   16964 system_pods.go:61] "nvidia-device-plugin-daemonset-bkftz" [e697891a-18dc-4004-8601-eff9e689acb4] Running
	I0315 05:59:06.881531   16964 system_pods.go:61] "registry-ms4xl" [57ba4009-bd31-45f4-8d43-f0fe7246bac5] Running
	I0315 05:59:06.881544   16964 system_pods.go:61] "registry-proxy-hsttb" [fddd7b1b-3abb-4bd0-a7d4-a205d2b263b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 05:59:06.881558   16964 system_pods.go:61] "snapshot-controller-58dbcc7b99-dc9fg" [8ba321fc-1607-45a5-9819-e9a0b499c6bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 05:59:06.881572   16964 system_pods.go:61] "snapshot-controller-58dbcc7b99-nqr8k" [f7ff4eb1-03a5-4e06-b74b-695154a55751] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 05:59:06.881581   16964 system_pods.go:61] "storage-provisioner" [7b0e05fc-d57b-47dd-a9a2-d52b27705a11] Running
	I0315 05:59:06.881590   16964 system_pods.go:61] "tiller-deploy-7b677967b9-g6cbc" [779f003c-8e64-4909-ae2e-adaa744eaddf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0315 05:59:06.881602   16964 system_pods.go:74] duration metric: took 158.946883ms to wait for pod list to return data ...
	I0315 05:59:06.881615   16964 default_sa.go:34] waiting for default service account to be created ...
	I0315 05:59:06.962741   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:07.070258   16964 default_sa.go:45] found service account: "default"
	I0315 05:59:07.070282   16964 default_sa.go:55] duration metric: took 188.657564ms for default service account to be created ...
	I0315 05:59:07.070290   16964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 05:59:07.291903   16964 system_pods.go:86] 18 kube-system pods found
	I0315 05:59:07.291933   16964 system_pods.go:89] "coredns-5dd5756b68-qkwrq" [1b6307d6-d4a4-4738-a3bb-123259d724cb] Running
	I0315 05:59:07.291946   16964 system_pods.go:89] "csi-hostpath-attacher-0" [40de6147-26c5-48a0-9fbb-a871c5358297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 05:59:07.291955   16964 system_pods.go:89] "csi-hostpath-resizer-0" [327a7a02-4ea0-42c7-8fe1-250a3f45170d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 05:59:07.291965   16964 system_pods.go:89] "csi-hostpathplugin-5swzw" [b2adac73-7cd0-4584-9f3f-671f369cb8e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 05:59:07.291974   16964 system_pods.go:89] "etcd-addons-480837" [ccc2092c-6839-4377-b7a3-05f197510db4] Running
	I0315 05:59:07.291982   16964 system_pods.go:89] "kube-apiserver-addons-480837" [8cb961de-df16-40c6-aae3-1eb79ae32476] Running
	I0315 05:59:07.291989   16964 system_pods.go:89] "kube-controller-manager-addons-480837" [7dda84a0-1301-4486-a6be-0ff5e689f786] Running
	I0315 05:59:07.292001   16964 system_pods.go:89] "kube-ingress-dns-minikube" [4c71f38e-185d-45d5-8176-d657c024205c] Running
	I0315 05:59:07.292012   16964 system_pods.go:89] "kube-proxy-wdw4w" [b793e4ee-9cd9-48d1-a6aa-e04fa427dc31] Running
	I0315 05:59:07.292019   16964 system_pods.go:89] "kube-scheduler-addons-480837" [30918671-32ca-4bdb-ae99-347c1e65cc92] Running
	I0315 05:59:07.292026   16964 system_pods.go:89] "metrics-server-69cf46c98-zb4x6" [79966cb5-86ce-4eae-9118-53d41994e123] Running
	I0315 05:59:07.292033   16964 system_pods.go:89] "nvidia-device-plugin-daemonset-bkftz" [e697891a-18dc-4004-8601-eff9e689acb4] Running
	I0315 05:59:07.292040   16964 system_pods.go:89] "registry-ms4xl" [57ba4009-bd31-45f4-8d43-f0fe7246bac5] Running
	I0315 05:59:07.292048   16964 system_pods.go:89] "registry-proxy-hsttb" [fddd7b1b-3abb-4bd0-a7d4-a205d2b263b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 05:59:07.292058   16964 system_pods.go:89] "snapshot-controller-58dbcc7b99-dc9fg" [8ba321fc-1607-45a5-9819-e9a0b499c6bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 05:59:07.292068   16964 system_pods.go:89] "snapshot-controller-58dbcc7b99-nqr8k" [f7ff4eb1-03a5-4e06-b74b-695154a55751] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 05:59:07.292074   16964 system_pods.go:89] "storage-provisioner" [7b0e05fc-d57b-47dd-a9a2-d52b27705a11] Running
	I0315 05:59:07.292082   16964 system_pods.go:89] "tiller-deploy-7b677967b9-g6cbc" [779f003c-8e64-4909-ae2e-adaa744eaddf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0315 05:59:07.292093   16964 system_pods.go:126] duration metric: took 221.797033ms to wait for k8s-apps to be running ...
	I0315 05:59:07.292104   16964 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 05:59:07.292163   16964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 05:59:07.296785   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:07.313745   16964 system_svc.go:56] duration metric: took 21.635312ms WaitForService to wait for kubelet
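	(Editor's note) The lines above show minikube verifying the kubelet unit by running `sudo systemctl is-active --quiet service kubelet` over its ssh_runner and timing the wait. A minimal Go sketch of that kind of check, run locally rather than through the cluster's SSH session (that substitution is an assumption for illustration, not minikube's actual code path):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// isServiceActive reports whether a systemd unit is active by invoking
	// `systemctl is-active --quiet <unit>`, which exits 0 only when the unit
	// is active. This mirrors the check logged above, minus sudo/SSH.
	func isServiceActive(unit string) bool {
		cmd := exec.Command("systemctl", "is-active", "--quiet", unit)
		return cmd.Run() == nil
	}

	func main() {
		if isServiceActive("kubelet") {
			fmt.Println("kubelet service is active")
		} else {
			fmt.Println("kubelet service is not active")
		}
	}
	```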
	I0315 05:59:07.313789   16964 kubeadm.go:576] duration metric: took 42.198021836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 05:59:07.313807   16964 node_conditions.go:102] verifying NodePressure condition ...
	I0315 05:59:07.373391   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:07.375476   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:07.462995   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:07.470292   16964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 05:59:07.470315   16964 node_conditions.go:123] node cpu capacity is 2
	I0315 05:59:07.470326   16964 node_conditions.go:105] duration metric: took 156.514635ms to run NodePressure ...
	I0315 05:59:07.470337   16964 start.go:240] waiting for startup goroutines ...
	I0315 05:59:07.795175   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:07.879406   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:07.884115   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:08.277652   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:08.296148   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:08.373025   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:08.376974   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:08.471017   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:08.797062   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:08.872535   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:08.875263   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:08.962734   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:09.296191   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:09.373129   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:09.375345   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:09.465097   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:09.796584   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:09.873100   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:09.877270   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:09.965999   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:10.301234   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:10.375820   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:10.378784   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:10.463494   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:10.820754   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:10.892023   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:10.892222   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:10.963056   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:11.295600   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:11.372567   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:11.376652   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:11.462497   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:11.796783   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:11.872546   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:11.875195   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:11.963631   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:12.384517   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:12.385081   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:12.387320   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:12.464114   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:12.795497   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:12.874499   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:12.877466   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:12.963361   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:13.295011   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:13.372018   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:13.374486   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:13.462318   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:13.795861   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:13.875346   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:13.881154   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:14.299652   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:14.300662   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:14.374520   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:14.375242   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:14.463133   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:14.795504   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:14.872714   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:14.876399   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:14.962663   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:15.296202   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:15.372782   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:15.375728   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:15.463067   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:15.796131   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:15.893251   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:15.894561   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:15.962774   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:16.298096   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:16.372330   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:16.375543   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:16.462759   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:16.796531   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:16.980239   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:16.981630   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:16.982049   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:17.295265   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:17.374875   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:17.377553   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:17.465197   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:17.795600   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:17.874636   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:17.877575   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:17.964832   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:18.296762   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:18.372701   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:18.374150   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:18.462990   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:18.795769   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:18.878860   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:18.879081   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:18.963058   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:19.295540   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:19.374434   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:19.377701   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:19.463068   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:19.799836   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:19.879283   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:19.881236   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:19.963103   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:20.296072   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:20.372604   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:20.375921   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:20.492191   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:20.795951   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:20.878599   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:20.879624   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:20.964660   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:21.295692   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:21.373578   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:21.375506   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:21.463023   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:21.795220   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:21.877646   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:21.880344   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:21.963355   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:22.295792   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:22.374798   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:22.376866   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:22.463156   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:22.795679   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:22.874639   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:22.878306   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:22.963255   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:23.295587   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:23.372949   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:23.376643   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:23.463973   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:23.795292   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:23.872643   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:23.875838   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:23.963340   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:24.294748   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:24.375114   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:24.379100   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:24.462948   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:24.795026   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:24.879822   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:24.886742   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:24.962919   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:25.296384   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:25.372650   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:25.376479   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:25.463331   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:25.795529   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:25.873169   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:25.875569   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:25.963902   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:26.296442   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:26.673278   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:26.681039   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:26.682248   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:26.795779   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:26.873106   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:26.875891   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:26.965285   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:27.298441   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:27.376208   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:27.377918   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:27.463284   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:27.796882   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:27.872889   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:27.876958   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 05:59:27.963913   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:28.296024   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:28.372548   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:28.375170   16964 kapi.go:107] duration metric: took 53.505525111s to wait for kubernetes.io/minikube-addons=registry ...
	I0315 05:59:28.464037   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:28.796434   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:28.872906   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:28.963188   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:29.296854   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:29.372653   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:29.464918   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:29.795800   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:29.873096   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:29.982045   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:30.297141   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:30.373616   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:30.463702   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:30.795976   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:30.874602   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:30.964778   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:31.295948   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:31.372718   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:31.464372   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:31.796667   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:31.874405   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:31.963786   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:32.295823   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:32.373218   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:32.463548   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:32.795916   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:32.874157   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:32.962475   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:33.297446   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:33.373196   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:33.463158   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:33.795037   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:33.874257   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:33.963325   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:34.295251   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:34.372760   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:34.463614   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:34.795845   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:34.874649   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:34.966424   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:35.296740   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:35.378307   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:35.464956   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:35.795751   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:35.876538   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:35.963593   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:36.295239   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:36.373735   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:36.462895   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:36.796214   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:36.873049   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:36.963154   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:37.296059   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:37.372378   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:37.463847   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:37.796290   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:37.872319   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:37.962693   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:38.296737   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:38.373540   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:38.463968   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:38.795128   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:38.874232   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:38.963215   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:39.297402   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:39.373572   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:39.463610   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:39.798091   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:39.876964   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:39.963218   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:40.295691   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:40.372767   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:40.463614   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:40.795138   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:40.872097   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:40.963608   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:41.296198   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:41.380582   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:41.463028   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:41.797015   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:41.872234   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:41.963491   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:42.295656   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:42.372852   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:42.470156   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:42.855074   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:42.882405   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:42.963826   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:43.295975   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:43.372079   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:43.462806   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:43.795531   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:43.872768   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:43.963930   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:44.295153   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:44.373109   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:44.463258   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:44.801893   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:44.873783   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:45.031700   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:45.295560   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:45.372960   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:45.462603   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:45.795902   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:45.873033   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:45.963391   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:46.295990   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:46.660926   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:46.665516   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:46.795657   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:46.872960   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:46.963116   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:47.295860   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:47.373200   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:47.462561   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:47.796231   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:47.876432   16964 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 05:59:47.964412   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:48.296790   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:48.380110   16964 kapi.go:107] duration metric: took 1m13.513408961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0315 05:59:48.466551   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:48.795532   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:48.964838   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:49.302366   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:49.463127   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:49.795194   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:49.965470   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:50.295449   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:50.462943   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:50.795494   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:50.965000   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:51.295939   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:51.463536   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:51.796208   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:51.962846   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:52.296008   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:52.467762   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:52.798421   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:53.388393   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:53.392753   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:53.466022   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:53.794892   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 05:59:53.966465   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:54.302649   16964 kapi.go:107] duration metric: took 1m15.511268622s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0315 05:59:54.304492   16964 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-480837 cluster.
	I0315 05:59:54.305844   16964 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0315 05:59:54.307085   16964 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
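	(Editor's note) The three gcp-auth messages above describe an opt-out: pods carrying a label with the `gcp-auth-skip-secret` key are skipped by the credential-mounting webhook. Below is a minimal sketch of a pod spec carrying that label, expressed with the Kubernetes Go client types; the label value "true", the pod name, and the container image are assumptions for illustration only, since the log specifies just the label key:

	```go
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Pod that opts out of GCP credential mounting via the label key
		// mentioned in the log above; the value "true" is an assumption.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"},
				},
			},
		}

		out, err := yaml.Marshal(&pod)
		if err != nil {
			panic(err)
		}
		// Prints a manifest that could be applied with kubectl in the
		// addons-480837 cluster to create a pod without mounted credentials.
		fmt.Print(string(out))
	}
	```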
	I0315 05:59:54.463681   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:54.964705   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:55.462391   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:55.962886   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:56.465952   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:56.962868   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:57.463005   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:57.963161   16964 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 05:59:58.463438   16964 kapi.go:107] duration metric: took 1m22.006511448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0315 05:59:58.465225   16964 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, storage-provisioner, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0315 05:59:58.466534   16964 addons.go:505] duration metric: took 1m33.350756183s for enable addons: enabled=[ingress-dns nvidia-device-plugin helm-tiller cloud-spanner storage-provisioner yakd metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0315 05:59:58.466575   16964 start.go:245] waiting for cluster config update ...
	I0315 05:59:58.466602   16964 start.go:254] writing updated cluster config ...
	I0315 05:59:58.466874   16964 ssh_runner.go:195] Run: rm -f paused
	I0315 05:59:58.520448   16964 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 05:59:58.522331   16964 out.go:177] * Done! kubectl is now configured to use "addons-480837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.663654818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710482583663628024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc3ed0cd-0ed3-4e45-aa23-0a5f6ed73281 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.664516409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43fa96f0-33d0-47ff-95f6-6abfbd13c287 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.664620374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43fa96f0-33d0-47ff-95f6-6abfbd13c287 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.664940328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:953dcb999388cdf8d623c667476966acc25e02be2953a8461a992d67fbcc6f2f,PodSandboxId:a3b0088a66d7448db0bd36ef7333a4e340de5bf65d55c57788d7382e5b5104ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710482576227717652,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zzpvx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4f6f023-f61f-4d11-8447-580b875fe665,},Annotations:map[string]string{io.kubernetes.container.hash: cee653a8,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a468dbf5d82f18230af29aec76e9f190172223885d6ff88aba1258f3506ef8,PodSandboxId:03bb3bac1cb8f4594e2462451c13dbb4a7ee7765c107a36686053c1387e56df6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710482446584331696,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-bj5g6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e00a2bd4-e141-4a45-9177-28f32d939937,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d41f0c06,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46173f89600c02ef8a181f9d75818ae6dad5803de2b6fff28c8a27698dccedd7,PodSandboxId:43680fbf9e980e523053fe5d8dbd0a259126536518bab22332f8c1b055603b64,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710482432924962493,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 6358bc9d-0837-4e49-ab72-c24ef4add6c7,},Annotations:map[string]string{io.kubernetes.container.hash: 75c145ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e933f898f74fd5e743261e2efa2f4ab321aa5486f2daaba079c5599d9e36471,PodSandboxId:49941cabe2f2929def520f165538643435d845116bac46ecd8ba067e020319fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710482393475208094,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-bf4sl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fe0d56e9-7d27-48b6-87ab-fa1632bf6965,},Annotations:map[string]string{io.kubernetes.container.hash: ebd20c6a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9747fc0c66f78793f48c34408c4d7718ffce0d21febabad0612a6b8814d1d1,PodSandboxId:b9f20bad9c9ed45ef26861961f5a6b05730cc7d3c162cb8080e36f3af6025852,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710482375002115237,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-447pd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296967d2-1839-4187-8cbf-5a608a1418de,},Annotations:map[string]string{io.kubernetes.container.hash: f98af12d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f524793a19a4ae57f7a3e5e32b381c35f5f7dae89083627416330e31b40c0de0,PodSandboxId:6c9608c422040c9fc9081960b65b999e736cf081b0ae5776b16f17627405a437,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710482372569084172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4gt5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a67cb70b-7840-4c82-ac7f-630326c9fc77,},Annotations:map[string]string{io.kubernetes.container.hash: a68dd22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5293781330af604ed960170f965784292cc393cd2a683f7ce40ea678bc3abba,PodSandboxId:97cdf33fe28b1675c2a3c662fb62ae13a1739666d612b11a3fea64e8a257e0dd,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710482348395241937,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-dw7wt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a0138eb0-3436-42ac-afab-58d7218d3d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 211bb608,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1ec0aa2a4f76dae82c96d12c3b70396a2bd6682858be86eedd767343b6da5,PodSandboxId:0297812980de0ad0c3fc16db949f4aed6e343280c0a92ba7c3de89799131f124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710482314041922780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0e05fc-d57b-47dd-a9a2-d52b27705a11,},Annotations:map[string]string{io.kubernetes.container.hash: 699c39e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f9eab70b5ef16e299c62c3876c9bd3c3263de2a9e408a7129de9da09b97aa,PodSandboxId:fa8b543d6df185e047e6f3fedc0bb701e73a32d5ba31c2a7a2c9de14b01e811e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710482306845229025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkwrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b6307d6-d4a4-4738-a3bb-123259d724cb,},Annotations:map[string]string{io.kubernetes.container.hash: a667bd9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b304d05db6bff5c5dc3db5ac9efa09cb970a89f27608219d96c6307b321b1065,PodSandboxId:5086e06fd1496187b918103d74055f657f6216ce06316dc0f7ef3ca4af61d3
8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710482305848142535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdw4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793e4ee-9cd9-48d1-a6aa-e04fa427dc31,},Annotations:map[string]string{io.kubernetes.container.hash: 49006c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e136a462a8f5d8ad3d9114693b9dabba32f4eba73160cad0189225e4b1ccba58,PodSandboxId:03f69281a7c1cb9f6ba820a7deeca82ed4310528817261426da628e429f691ba,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710482286380231938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4eca1497f5acb7b66346b2ebb440a1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4780bae583ea92c213ef7510f24d850d249656d8e73f8d7b062d67c657fd3ea1,PodSandboxId:9979414dfefb581eb48f677b0851e577126f7cb7b04e2efefd89b45a1f0e9a88,Metadata:&ContainerMetadata{Name:kube-controlle
r-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710482286351116609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc95f66977d3177aa4e1f460ed2587c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a1fd8b8e03c4f6190ff1cfb9df796c99b340b8ca9ec7ad353d2d626f3cb937,PodSandboxId:37c3234d0fe71b3820df8266caa5502af96a774ebdf81f3b07efd5e37575598a,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710482286284493009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8035a91b8b4bbc5081a0b4f3fa5275a0,},Annotations:map[string]string{io.kubernetes.container.hash: 415fbc3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75e4e8114c28071ff8e79c737941ccacd621982f25201ecc27c8376d42c68cc,PodSandboxId:0241c9f8caa70695da7e1d596819523768c9af70df3a9b281fe53a6e540d4b3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710482286284679274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6730385048f8d50fd6779c2eb51807,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43fa96f0-33d0-47ff-95f6-6abfbd13c287 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.705247952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e4307a3-efbb-4a5e-832c-5e37c036c850 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.705321937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e4307a3-efbb-4a5e-832c-5e37c036c850 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.707077212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55d447d2-e66d-41ae-843b-2bd658a5b1db name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.708309672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710482583708281590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55d447d2-e66d-41ae-843b-2bd658a5b1db name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.708925487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d4ebb25-de7b-4ac6-9b7f-67ff11f44a83 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.708980819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d4ebb25-de7b-4ac6-9b7f-67ff11f44a83 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.709287837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:953dcb999388cdf8d623c667476966acc25e02be2953a8461a992d67fbcc6f2f,PodSandboxId:a3b0088a66d7448db0bd36ef7333a4e340de5bf65d55c57788d7382e5b5104ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710482576227717652,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zzpvx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4f6f023-f61f-4d11-8447-580b875fe665,},Annotations:map[string]string{io.kubernetes.container.hash: cee653a8,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a468dbf5d82f18230af29aec76e9f190172223885d6ff88aba1258f3506ef8,PodSandboxId:03bb3bac1cb8f4594e2462451c13dbb4a7ee7765c107a36686053c1387e56df6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710482446584331696,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-bj5g6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e00a2bd4-e141-4a45-9177-28f32d939937,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d41f0c06,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46173f89600c02ef8a181f9d75818ae6dad5803de2b6fff28c8a27698dccedd7,PodSandboxId:43680fbf9e980e523053fe5d8dbd0a259126536518bab22332f8c1b055603b64,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710482432924962493,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 6358bc9d-0837-4e49-ab72-c24ef4add6c7,},Annotations:map[string]string{io.kubernetes.container.hash: 75c145ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e933f898f74fd5e743261e2efa2f4ab321aa5486f2daaba079c5599d9e36471,PodSandboxId:49941cabe2f2929def520f165538643435d845116bac46ecd8ba067e020319fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710482393475208094,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-bf4sl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fe0d56e9-7d27-48b6-87ab-fa1632bf6965,},Annotations:map[string]string{io.kubernetes.container.hash: ebd20c6a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9747fc0c66f78793f48c34408c4d7718ffce0d21febabad0612a6b8814d1d1,PodSandboxId:b9f20bad9c9ed45ef26861961f5a6b05730cc7d3c162cb8080e36f3af6025852,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710482375002115237,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-447pd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296967d2-1839-4187-8cbf-5a608a1418de,},Annotations:map[string]string{io.kubernetes.container.hash: f98af12d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f524793a19a4ae57f7a3e5e32b381c35f5f7dae89083627416330e31b40c0de0,PodSandboxId:6c9608c422040c9fc9081960b65b999e736cf081b0ae5776b16f17627405a437,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710482372569084172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4gt5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a67cb70b-7840-4c82-ac7f-630326c9fc77,},Annotations:map[string]string{io.kubernetes.container.hash: a68dd22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5293781330af604ed960170f965784292cc393cd2a683f7ce40ea678bc3abba,PodSandboxId:97cdf33fe28b1675c2a3c662fb62ae13a1739666d612b11a3fea64e8a257e0dd,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710482348395241937,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-dw7wt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a0138eb0-3436-42ac-afab-58d7218d3d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 211bb608,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1ec0aa2a4f76dae82c96d12c3b70396a2bd6682858be86eedd767343b6da5,PodSandboxId:0297812980de0ad0c3fc16db949f4aed6e343280c0a92ba7c3de89799131f124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710482314041922780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0e05fc-d57b-47dd-a9a2-d52b27705a11,},Annotations:map[string]string{io.kubernetes.container.hash: 699c39e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f9eab70b5ef16e299c62c3876c9bd3c3263de2a9e408a7129de9da09b97aa,PodSandboxId:fa8b543d6df185e047e6f3fedc0bb701e73a32d5ba31c2a7a2c9de14b01e811e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710482306845229025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkwrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b6307d6-d4a4-4738-a3bb-123259d724cb,},Annotations:map[string]string{io.kubernetes.container.hash: a667bd9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b304d05db6bff5c5dc3db5ac9efa09cb970a89f27608219d96c6307b321b1065,PodSandboxId:5086e06fd1496187b918103d74055f657f6216ce06316dc0f7ef3ca4af61d3
8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710482305848142535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdw4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793e4ee-9cd9-48d1-a6aa-e04fa427dc31,},Annotations:map[string]string{io.kubernetes.container.hash: 49006c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e136a462a8f5d8ad3d9114693b9dabba32f4eba73160cad0189225e4b1ccba58,PodSandboxId:03f69281a7c1cb9f6ba820a7deeca82ed4310528817261426da628e429f691ba,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710482286380231938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4eca1497f5acb7b66346b2ebb440a1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4780bae583ea92c213ef7510f24d850d249656d8e73f8d7b062d67c657fd3ea1,PodSandboxId:9979414dfefb581eb48f677b0851e577126f7cb7b04e2efefd89b45a1f0e9a88,Metadata:&ContainerMetadata{Name:kube-controlle
r-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710482286351116609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc95f66977d3177aa4e1f460ed2587c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a1fd8b8e03c4f6190ff1cfb9df796c99b340b8ca9ec7ad353d2d626f3cb937,PodSandboxId:37c3234d0fe71b3820df8266caa5502af96a774ebdf81f3b07efd5e37575598a,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710482286284493009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8035a91b8b4bbc5081a0b4f3fa5275a0,},Annotations:map[string]string{io.kubernetes.container.hash: 415fbc3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75e4e8114c28071ff8e79c737941ccacd621982f25201ecc27c8376d42c68cc,PodSandboxId:0241c9f8caa70695da7e1d596819523768c9af70df3a9b281fe53a6e540d4b3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710482286284679274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6730385048f8d50fd6779c2eb51807,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d4ebb25-de7b-4ac6-9b7f-67ff11f44a83 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.752375648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f87170f-c79b-4542-990a-bdddbb578fe3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.752447714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f87170f-c79b-4542-990a-bdddbb578fe3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.754045194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e37d24b-004d-4c2d-8dcc-8d0304d8723c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.755310442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710482583755282692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e37d24b-004d-4c2d-8dcc-8d0304d8723c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.755898677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83bef7e1-22f5-4f3c-b56b-bea7fe5eddb4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.755995556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83bef7e1-22f5-4f3c-b56b-bea7fe5eddb4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.756488722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:953dcb999388cdf8d623c667476966acc25e02be2953a8461a992d67fbcc6f2f,PodSandboxId:a3b0088a66d7448db0bd36ef7333a4e340de5bf65d55c57788d7382e5b5104ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710482576227717652,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zzpvx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4f6f023-f61f-4d11-8447-580b875fe665,},Annotations:map[string]string{io.kubernetes.container.hash: cee653a8,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a468dbf5d82f18230af29aec76e9f190172223885d6ff88aba1258f3506ef8,PodSandboxId:03bb3bac1cb8f4594e2462451c13dbb4a7ee7765c107a36686053c1387e56df6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710482446584331696,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-bj5g6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e00a2bd4-e141-4a45-9177-28f32d939937,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d41f0c06,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46173f89600c02ef8a181f9d75818ae6dad5803de2b6fff28c8a27698dccedd7,PodSandboxId:43680fbf9e980e523053fe5d8dbd0a259126536518bab22332f8c1b055603b64,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710482432924962493,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 6358bc9d-0837-4e49-ab72-c24ef4add6c7,},Annotations:map[string]string{io.kubernetes.container.hash: 75c145ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e933f898f74fd5e743261e2efa2f4ab321aa5486f2daaba079c5599d9e36471,PodSandboxId:49941cabe2f2929def520f165538643435d845116bac46ecd8ba067e020319fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710482393475208094,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-bf4sl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fe0d56e9-7d27-48b6-87ab-fa1632bf6965,},Annotations:map[string]string{io.kubernetes.container.hash: ebd20c6a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9747fc0c66f78793f48c34408c4d7718ffce0d21febabad0612a6b8814d1d1,PodSandboxId:b9f20bad9c9ed45ef26861961f5a6b05730cc7d3c162cb8080e36f3af6025852,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710482375002115237,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-447pd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296967d2-1839-4187-8cbf-5a608a1418de,},Annotations:map[string]string{io.kubernetes.container.hash: f98af12d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f524793a19a4ae57f7a3e5e32b381c35f5f7dae89083627416330e31b40c0de0,PodSandboxId:6c9608c422040c9fc9081960b65b999e736cf081b0ae5776b16f17627405a437,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710482372569084172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4gt5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a67cb70b-7840-4c82-ac7f-630326c9fc77,},Annotations:map[string]string{io.kubernetes.container.hash: a68dd22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5293781330af604ed960170f965784292cc393cd2a683f7ce40ea678bc3abba,PodSandboxId:97cdf33fe28b1675c2a3c662fb62ae13a1739666d612b11a3fea64e8a257e0dd,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710482348395241937,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-dw7wt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a0138eb0-3436-42ac-afab-58d7218d3d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 211bb608,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1ec0aa2a4f76dae82c96d12c3b70396a2bd6682858be86eedd767343b6da5,PodSandboxId:0297812980de0ad0c3fc16db949f4aed6e343280c0a92ba7c3de89799131f124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710482314041922780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0e05fc-d57b-47dd-a9a2-d52b27705a11,},Annotations:map[string]string{io.kubernetes.container.hash: 699c39e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f9eab70b5ef16e299c62c3876c9bd3c3263de2a9e408a7129de9da09b97aa,PodSandboxId:fa8b543d6df185e047e6f3fedc0bb701e73a32d5ba31c2a7a2c9de14b01e811e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710482306845229025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkwrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b6307d6-d4a4-4738-a3bb-123259d724cb,},Annotations:map[string]string{io.kubernetes.container.hash: a667bd9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b304d05db6bff5c5dc3db5ac9efa09cb970a89f27608219d96c6307b321b1065,PodSandboxId:5086e06fd1496187b918103d74055f657f6216ce06316dc0f7ef3ca4af61d3
8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710482305848142535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdw4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793e4ee-9cd9-48d1-a6aa-e04fa427dc31,},Annotations:map[string]string{io.kubernetes.container.hash: 49006c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e136a462a8f5d8ad3d9114693b9dabba32f4eba73160cad0189225e4b1ccba58,PodSandboxId:03f69281a7c1cb9f6ba820a7deeca82ed4310528817261426da628e429f691ba,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710482286380231938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4eca1497f5acb7b66346b2ebb440a1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4780bae583ea92c213ef7510f24d850d249656d8e73f8d7b062d67c657fd3ea1,PodSandboxId:9979414dfefb581eb48f677b0851e577126f7cb7b04e2efefd89b45a1f0e9a88,Metadata:&ContainerMetadata{Name:kube-controlle
r-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710482286351116609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc95f66977d3177aa4e1f460ed2587c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a1fd8b8e03c4f6190ff1cfb9df796c99b340b8ca9ec7ad353d2d626f3cb937,PodSandboxId:37c3234d0fe71b3820df8266caa5502af96a774ebdf81f3b07efd5e37575598a,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710482286284493009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8035a91b8b4bbc5081a0b4f3fa5275a0,},Annotations:map[string]string{io.kubernetes.container.hash: 415fbc3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75e4e8114c28071ff8e79c737941ccacd621982f25201ecc27c8376d42c68cc,PodSandboxId:0241c9f8caa70695da7e1d596819523768c9af70df3a9b281fe53a6e540d4b3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710482286284679274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6730385048f8d50fd6779c2eb51807,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83bef7e1-22f5-4f3c-b56b-bea7fe5eddb4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.796643250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5722ae3d-5488-4481-a5e5-0bcf7b5684c1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.796715323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5722ae3d-5488-4481-a5e5-0bcf7b5684c1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.797953267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adc7f337-94b5-41dc-b964-34aa30ce1860 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.799969039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710482583799936475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adc7f337-94b5-41dc-b964-34aa30ce1860 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.800668550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e29ad281-d711-4dc3-ab9a-f1858d481d28 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.800726076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e29ad281-d711-4dc3-ab9a-f1858d481d28 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:03:03 addons-480837 crio[678]: time="2024-03-15 06:03:03.801074646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:953dcb999388cdf8d623c667476966acc25e02be2953a8461a992d67fbcc6f2f,PodSandboxId:a3b0088a66d7448db0bd36ef7333a4e340de5bf65d55c57788d7382e5b5104ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710482576227717652,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zzpvx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4f6f023-f61f-4d11-8447-580b875fe665,},Annotations:map[string]string{io.kubernetes.container.hash: cee653a8,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a468dbf5d82f18230af29aec76e9f190172223885d6ff88aba1258f3506ef8,PodSandboxId:03bb3bac1cb8f4594e2462451c13dbb4a7ee7765c107a36686053c1387e56df6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710482446584331696,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-bj5g6,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e00a2bd4-e141-4a45-9177-28f32d939937,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d41f0c06,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46173f89600c02ef8a181f9d75818ae6dad5803de2b6fff28c8a27698dccedd7,PodSandboxId:43680fbf9e980e523053fe5d8dbd0a259126536518bab22332f8c1b055603b64,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710482432924962493,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 6358bc9d-0837-4e49-ab72-c24ef4add6c7,},Annotations:map[string]string{io.kubernetes.container.hash: 75c145ca,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e933f898f74fd5e743261e2efa2f4ab321aa5486f2daaba079c5599d9e36471,PodSandboxId:49941cabe2f2929def520f165538643435d845116bac46ecd8ba067e020319fc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710482393475208094,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-bf4sl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fe0d56e9-7d27-48b6-87ab-fa1632bf6965,},Annotations:map[string]string{io.kubernetes.container.hash: ebd20c6a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9747fc0c66f78793f48c34408c4d7718ffce0d21febabad0612a6b8814d1d1,PodSandboxId:b9f20bad9c9ed45ef26861961f5a6b05730cc7d3c162cb8080e36f3af6025852,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710482375002115237,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-447pd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296967d2-1839-4187-8cbf-5a608a1418de,},Annotations:map[string]string{io.kubernetes.container.hash: f98af12d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f524793a19a4ae57f7a3e5e32b381c35f5f7dae89083627416330e31b40c0de0,PodSandboxId:6c9608c422040c9fc9081960b65b999e736cf081b0ae5776b16f17627405a437,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710482372569084172,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4gt5r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a67cb70b-7840-4c82-ac7f-630326c9fc77,},Annotations:map[string]string{io.kubernetes.container.hash: a68dd22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5293781330af604ed960170f965784292cc393cd2a683f7ce40ea678bc3abba,PodSandboxId:97cdf33fe28b1675c2a3c662fb62ae13a1739666d612b11a3fea64e8a257e0dd,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710482348395241937,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-dw7wt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a0138eb0-3436-42ac-afab-58d7218d3d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 211bb608,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1ec0aa2a4f76dae82c96d12c3b70396a2bd6682858be86eedd767343b6da5,PodSandboxId:0297812980de0ad0c3fc16db949f4aed6e343280c0a92ba7c3de89799131f124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710482314041922780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0e05fc-d57b-47dd-a9a2-d52b27705a11,},Annotations:map[string]string{io.kubernetes.container.hash: 699c39e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f9eab70b5ef16e299c62c3876c9bd3c3263de2a9e408a7129de9da09b97aa,PodSandboxId:fa8b543d6df185e047e6f3fedc0bb701e73a32d5ba31c2a7a2c9de14b01e811e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710482306845229025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkwrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b6307d6-d4a4-4738-a3bb-123259d724cb,},Annotations:map[string]string{io.kubernetes.container.hash: a667bd9e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b304d05db6bff5c5dc3db5ac9efa09cb970a89f27608219d96c6307b321b1065,PodSandboxId:5086e06fd1496187b918103d74055f657f6216ce06316dc0f7ef3ca4af61d3
8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710482305848142535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdw4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b793e4ee-9cd9-48d1-a6aa-e04fa427dc31,},Annotations:map[string]string{io.kubernetes.container.hash: 49006c76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e136a462a8f5d8ad3d9114693b9dabba32f4eba73160cad0189225e4b1ccba58,PodSandboxId:03f69281a7c1cb9f6ba820a7deeca82ed4310528817261426da628e429f691ba,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710482286380231938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4eca1497f5acb7b66346b2ebb440a1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4780bae583ea92c213ef7510f24d850d249656d8e73f8d7b062d67c657fd3ea1,PodSandboxId:9979414dfefb581eb48f677b0851e577126f7cb7b04e2efefd89b45a1f0e9a88,Metadata:&ContainerMetadata{Name:kube-controlle
r-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710482286351116609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc95f66977d3177aa4e1f460ed2587c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a1fd8b8e03c4f6190ff1cfb9df796c99b340b8ca9ec7ad353d2d626f3cb937,PodSandboxId:37c3234d0fe71b3820df8266caa5502af96a774ebdf81f3b07efd5e37575598a,Metadata:&ContainerMetadata{Name:etcd
,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710482286284493009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8035a91b8b4bbc5081a0b4f3fa5275a0,},Annotations:map[string]string{io.kubernetes.container.hash: 415fbc3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f75e4e8114c28071ff8e79c737941ccacd621982f25201ecc27c8376d42c68cc,PodSandboxId:0241c9f8caa70695da7e1d596819523768c9af70df3a9b281fe53a6e540d4b3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710482286284679274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-480837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6730385048f8d50fd6779c2eb51807,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e29ad281-d711-4dc3-ab9a-f1858d481d28 name=/runtime.v1.RuntimeService/ListContainers
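
The CRI-O debug entries above record the CRI traffic the runtime served while this log bundle was collected: a Version probe, an ImageFsInfo query, and a ListContainers call with an empty filter ("No filters were applied"), each answered with the full container inventory. The following Go sketch is not part of the captured output; it assumes the default CRI-O endpoint unix:///var/run/crio/crio.sock is reachable on the node and shows how the same /runtime.v1.RuntimeService RPCs could be replayed to reproduce these payloads.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed CRI-O endpoint; adjust if the runtime listens elsewhere.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC as "/runtime.v1.RuntimeService/Version" in the log above.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same RPC as ListContainers with an empty filter, which returns every container.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-16s %-24s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}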
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	953dcb999388c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   a3b0088a66d74       hello-world-app-5d77478584-zzpvx
	97a468dbf5d82       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   03bb3bac1cb8f       headlamp-5485c556b-bj5g6
	46173f89600c0       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   43680fbf9e980       nginx
	0e933f898f74f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   49941cabe2f29       gcp-auth-7d69788767-bf4sl
	5c9747fc0c66f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   b9f20bad9c9ed       ingress-nginx-admission-patch-447pd
	f524793a19a4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   6c9608c422040       ingress-nginx-admission-create-4gt5r
	c5293781330af       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   97cdf33fe28b1       yakd-dashboard-9947fc6bf-dw7wt
	e2a1ec0aa2a4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   0297812980de0       storage-provisioner
	693f9eab70b5e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   fa8b543d6df18       coredns-5dd5756b68-qkwrq
	b304d05db6bff       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   5086e06fd1496       kube-proxy-wdw4w
	e136a462a8f5d       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   03f69281a7c1c       kube-scheduler-addons-480837
	4780bae583ea9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   9979414dfefb5       kube-controller-manager-addons-480837
	f75e4e8114c28       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   0241c9f8caa70       kube-apiserver-addons-480837
	d7a1fd8b8e03c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   37c3234d0fe71       etcd-addons-480837
	
	
	==> coredns [693f9eab70b5ef16e299c62c3876c9bd3c3263de2a9e408a7129de9da09b97aa] <==
	[INFO] 10.244.0.7:35137 - 37135 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082539s
	[INFO] 10.244.0.7:50780 - 40590 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082169s
	[INFO] 10.244.0.7:50780 - 50572 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087317s
	[INFO] 10.244.0.7:42227 - 41576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067659s
	[INFO] 10.244.0.7:42227 - 28523 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086496s
	[INFO] 10.244.0.7:35018 - 50462 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089611s
	[INFO] 10.244.0.7:35018 - 42000 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000231196s
	[INFO] 10.244.0.7:34201 - 55265 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156974s
	[INFO] 10.244.0.7:34201 - 55010 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000232782s
	[INFO] 10.244.0.7:52839 - 19821 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037059s
	[INFO] 10.244.0.7:52839 - 40299 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000289947s
	[INFO] 10.244.0.7:48698 - 26759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043269s
	[INFO] 10.244.0.7:48698 - 22937 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034842s
	[INFO] 10.244.0.7:50823 - 7157 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042644s
	[INFO] 10.244.0.7:50823 - 58612 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002471s
	[INFO] 10.244.0.22:52932 - 60568 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000447034s
	[INFO] 10.244.0.22:32919 - 8857 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000937247s
	[INFO] 10.244.0.22:55308 - 28185 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000300691s
	[INFO] 10.244.0.22:52268 - 19838 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149794s
	[INFO] 10.244.0.22:45045 - 61442 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013538s
	[INFO] 10.244.0.22:42081 - 51088 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001299763s
	[INFO] 10.244.0.22:52954 - 53460 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000837075s
	[INFO] 10.244.0.22:40525 - 18574 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001231115s
	[INFO] 10.244.0.25:33311 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000503032s
	[INFO] 10.244.0.25:54919 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.002815282s
	
	
	==> describe nodes <==
	Name:               addons-480837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-480837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=addons-480837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T05_58_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-480837
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 05:58:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-480837
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:02:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:01:16 +0000   Fri, 15 Mar 2024 05:58:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:01:16 +0000   Fri, 15 Mar 2024 05:58:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:01:16 +0000   Fri, 15 Mar 2024 05:58:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:01:16 +0000   Fri, 15 Mar 2024 05:58:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    addons-480837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 dae4a618227448649ce62796cf013b4a
	  System UUID:                dae4a618-2274-4864-9ce6-2796cf013b4a
	  Boot ID:                    62d00cb6-db1d-4315-a519-1e503444d136
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zzpvx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gcp-auth                    gcp-auth-7d69788767-bf4sl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  headlamp                    headlamp-5485c556b-bj5g6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-5dd5756b68-qkwrq                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 etcd-addons-480837                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-addons-480837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-addons-480837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-wdw4w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-480837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-dw7wt           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node addons-480837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node addons-480837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node addons-480837 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m52s                  kubelet          Node addons-480837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s                  kubelet          Node addons-480837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s                  kubelet          Node addons-480837 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m51s                  kubelet          Node addons-480837 status is now: NodeReady
	  Normal  RegisteredNode           4m41s                  node-controller  Node addons-480837 event: Registered Node addons-480837 in Controller
	
	
	==> dmesg <==
	[ +12.762494] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +0.034687] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.076323] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.122285] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.531288] kauditd_printk_skb: 66 callbacks suppressed
	[  +9.456361] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.291951] kauditd_printk_skb: 6 callbacks suppressed
	[Mar15 05:59] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.230208] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.390075] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.635102] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.552919] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.689385] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.028339] kauditd_printk_skb: 12 callbacks suppressed
	[Mar15 06:00] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.503112] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.570901] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.931904] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.715349] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.209431] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.628906] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.197818] kauditd_printk_skb: 15 callbacks suppressed
	[Mar15 06:01] kauditd_printk_skb: 25 callbacks suppressed
	[Mar15 06:02] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.673506] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [d7a1fd8b8e03c4f6190ff1cfb9df796c99b340b8ca9ec7ad353d2d626f3cb937] <==
	{"level":"info","ts":"2024-03-15T05:59:46.643719Z","caller":"traceutil/trace.go:171","msg":"trace[131994328] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1138; }","duration":"196.69348ms","start":"2024-03-15T05:59:46.447018Z","end":"2024-03-15T05:59:46.643712Z","steps":["trace[131994328] 'agreement among raft nodes before linearized reading'  (duration: 196.432124ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T05:59:53.371254Z","caller":"traceutil/trace.go:171","msg":"trace[384537507] linearizableReadLoop","detail":"{readStateIndex:1208; appliedIndex:1207; }","duration":"423.900874ms","start":"2024-03-15T05:59:52.947339Z","end":"2024-03-15T05:59:53.37124Z","steps":["trace[384537507] 'read index received'  (duration: 423.701337ms)","trace[384537507] 'applied index is now lower than readState.Index'  (duration: 198.818µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T05:59:53.371518Z","caller":"traceutil/trace.go:171","msg":"trace[719996803] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"461.769389ms","start":"2024-03-15T05:59:52.909737Z","end":"2024-03-15T05:59:53.371506Z","steps":["trace[719996803] 'process raft request'  (duration: 461.347466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T05:59:53.371872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T05:59:52.909721Z","time spent":"461.902761ms","remote":"127.0.0.1:47300","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1143 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-03-15T05:59:53.372162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"424.861776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81543"}
	{"level":"info","ts":"2024-03-15T05:59:53.372186Z","caller":"traceutil/trace.go:171","msg":"trace[1283345350] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1170; }","duration":"424.890073ms","start":"2024-03-15T05:59:52.947289Z","end":"2024-03-15T05:59:53.372179Z","steps":["trace[1283345350] 'agreement among raft nodes before linearized reading'  (duration: 424.750374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T05:59:53.372216Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T05:59:52.947232Z","time spent":"424.979237ms","remote":"127.0.0.1:47202","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81567,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-03-15T05:59:53.372409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.479135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-15T05:59:53.37243Z","caller":"traceutil/trace.go:171","msg":"trace[179011225] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1170; }","duration":"322.501176ms","start":"2024-03-15T05:59:53.049922Z","end":"2024-03-15T05:59:53.372423Z","steps":["trace[179011225] 'agreement among raft nodes before linearized reading'  (duration: 322.457889ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T05:59:53.372446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T05:59:53.049907Z","time spent":"322.535568ms","remote":"127.0.0.1:47184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-03-15T06:00:04.943416Z","caller":"traceutil/trace.go:171","msg":"trace[526139615] linearizableReadLoop","detail":"{readStateIndex:1292; appliedIndex:1291; }","duration":"196.045414ms","start":"2024-03-15T06:00:04.747355Z","end":"2024-03-15T06:00:04.9434Z","steps":["trace[526139615] 'read index received'  (duration: 195.836505ms)","trace[526139615] 'applied index is now lower than readState.Index'  (duration: 208.372µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T06:00:04.943491Z","caller":"traceutil/trace.go:171","msg":"trace[1024063557] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"268.320355ms","start":"2024-03-15T06:00:04.675161Z","end":"2024-03-15T06:00:04.943481Z","steps":["trace[1024063557] 'process raft request'  (duration: 268.071101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:00:04.943654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.387428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T06:00:04.943675Z","caller":"traceutil/trace.go:171","msg":"trace[1153696864] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1252; }","duration":"196.421219ms","start":"2024-03-15T06:00:04.747249Z","end":"2024-03-15T06:00:04.94367Z","steps":["trace[1153696864] 'agreement among raft nodes before linearized reading'  (duration: 196.3706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:00:04.943798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.30497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-03-15T06:00:04.943831Z","caller":"traceutil/trace.go:171","msg":"trace[861501122] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1252; }","duration":"196.343551ms","start":"2024-03-15T06:00:04.74748Z","end":"2024-03-15T06:00:04.943823Z","steps":["trace[861501122] 'agreement among raft nodes before linearized reading'  (duration: 196.280232ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T06:00:27.364285Z","caller":"traceutil/trace.go:171","msg":"trace[241810226] transaction","detail":"{read_only:false; response_revision:1477; number_of_response:1; }","duration":"357.414773ms","start":"2024-03-15T06:00:27.00684Z","end":"2024-03-15T06:00:27.364255Z","steps":["trace[241810226] 'process raft request'  (duration: 356.882664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:00:27.364424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T06:00:27.006827Z","time spent":"357.534617ms","remote":"127.0.0.1:47086","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":783,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-76dc478dd8-kbffm.17bcdaabbf773c69\" mod_revision:1460 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-76dc478dd8-kbffm.17bcdaabbf773c69\" value_size:676 lease:5093446112067997309 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-76dc478dd8-kbffm.17bcdaabbf773c69\" > >"}
	{"level":"info","ts":"2024-03-15T06:00:32.841993Z","caller":"traceutil/trace.go:171","msg":"trace[1404871380] linearizableReadLoop","detail":"{readStateIndex:1564; appliedIndex:1563; }","duration":"415.23722ms","start":"2024-03-15T06:00:32.426737Z","end":"2024-03-15T06:00:32.841974Z","steps":["trace[1404871380] 'read index received'  (duration: 415.028931ms)","trace[1404871380] 'applied index is now lower than readState.Index'  (duration: 207.291µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T06:00:32.842207Z","caller":"traceutil/trace.go:171","msg":"trace[197701368] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1507; }","duration":"420.986877ms","start":"2024-03-15T06:00:32.421211Z","end":"2024-03-15T06:00:32.842198Z","steps":["trace[197701368] 'process raft request'  (duration: 420.599849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:00:32.842462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T06:00:32.421199Z","time spent":"421.057894ms","remote":"127.0.0.1:47114","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":48,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/configmaps/gadget/kube-root-ca.crt\" mod_revision:618 > success:<request_delete_range:<key:\"/registry/configmaps/gadget/kube-root-ca.crt\" > > failure:<request_range:<key:\"/registry/configmaps/gadget/kube-root-ca.crt\" > >"}
	{"level":"warn","ts":"2024-03-15T06:00:32.843017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.296664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-15T06:00:32.844144Z","caller":"traceutil/trace.go:171","msg":"trace[741479871] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1507; }","duration":"417.427223ms","start":"2024-03-15T06:00:32.426706Z","end":"2024-03-15T06:00:32.844133Z","steps":["trace[741479871] 'agreement among raft nodes before linearized reading'  (duration: 416.277378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:00:32.844221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T06:00:32.426696Z","time spent":"417.508472ms","remote":"127.0.0.1:47290","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":31,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"info","ts":"2024-03-15T06:00:45.571044Z","caller":"traceutil/trace.go:171","msg":"trace[755211018] transaction","detail":"{read_only:false; response_revision:1599; number_of_response:1; }","duration":"189.441122ms","start":"2024-03-15T06:00:45.381524Z","end":"2024-03-15T06:00:45.570965Z","steps":["trace[755211018] 'process raft request'  (duration: 189.222341ms)"],"step_count":1}
	
	
	==> gcp-auth [0e933f898f74fd5e743261e2efa2f4ab321aa5486f2daaba079c5599d9e36471] <==
	2024/03/15 05:59:53 GCP Auth Webhook started!
	2024/03/15 05:59:58 Ready to marshal response ...
	2024/03/15 05:59:58 Ready to write response ...
	2024/03/15 05:59:58 Ready to marshal response ...
	2024/03/15 05:59:58 Ready to write response ...
	2024/03/15 06:00:09 Ready to marshal response ...
	2024/03/15 06:00:09 Ready to write response ...
	2024/03/15 06:00:10 Ready to marshal response ...
	2024/03/15 06:00:10 Ready to write response ...
	2024/03/15 06:00:14 Ready to marshal response ...
	2024/03/15 06:00:14 Ready to write response ...
	2024/03/15 06:00:21 Ready to marshal response ...
	2024/03/15 06:00:21 Ready to write response ...
	2024/03/15 06:00:23 Ready to marshal response ...
	2024/03/15 06:00:23 Ready to write response ...
	2024/03/15 06:00:34 Ready to marshal response ...
	2024/03/15 06:00:34 Ready to write response ...
	2024/03/15 06:00:34 Ready to marshal response ...
	2024/03/15 06:00:34 Ready to write response ...
	2024/03/15 06:00:34 Ready to marshal response ...
	2024/03/15 06:00:34 Ready to write response ...
	2024/03/15 06:00:49 Ready to marshal response ...
	2024/03/15 06:00:49 Ready to write response ...
	2024/03/15 06:02:52 Ready to marshal response ...
	2024/03/15 06:02:52 Ready to write response ...
	
	
	==> kernel <==
	 06:03:04 up 5 min,  0 users,  load average: 0.65, 1.44, 0.77
	Linux addons-480837 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f75e4e8114c28071ff8e79c737941ccacd621982f25201ecc27c8376d42c68cc] <==
	I0315 06:00:27.424941       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0315 06:00:27.447222       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0315 06:00:28.474909       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0315 06:00:30.481840       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0315 06:00:34.607193       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.14.101"}
	I0315 06:00:36.931589       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0315 06:00:54.463767       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0315 06:01:05.886113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.886209       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.903811       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.904166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.915250       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.915339       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.919388       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.919860       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.968661       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.968760       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.978887       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.978954       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 06:01:05.986327       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 06:01:05.986413       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0315 06:01:06.920803       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0315 06:01:06.987331       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0315 06:01:07.001641       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0315 06:02:53.153994       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.60.131"}
	
	
	==> kube-controller-manager [4780bae583ea92c213ef7510f24d850d249656d8e73f8d7b062d67c657fd3ea1] <==
	E0315 06:01:40.913264       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:01:43.082730       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:01:43.082783       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:02:07.096150       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:02:07.096277       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:02:14.841582       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:02:14.841720       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:02:25.393208       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:02:25.393309       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:02:37.168471       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:02:37.168688       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0315 06:02:52.955884       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0315 06:02:52.980103       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zzpvx"
	I0315 06:02:52.991868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.322822ms"
	I0315 06:02:53.039776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.777874ms"
	I0315 06:02:53.040326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="111.779µs"
	I0315 06:02:55.775164       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0315 06:02:55.778146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="58.849µs"
	I0315 06:02:55.785912       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0315 06:02:56.928477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.733136ms"
	I0315 06:02:56.930375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.138µs"
	W0315 06:02:58.514905       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:02:58.514968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 06:03:00.118306       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 06:03:00.118360       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [b304d05db6bff5c5dc3db5ac9efa09cb970a89f27608219d96c6307b321b1065] <==
	I0315 05:58:26.622425       1 server_others.go:69] "Using iptables proxy"
	I0315 05:58:26.640618       1 node.go:141] Successfully retrieved node IP: 192.168.39.159
	I0315 05:58:26.974042       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 05:58:26.974060       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 05:58:27.034079       1 server_others.go:152] "Using iptables Proxier"
	I0315 05:58:27.034114       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 05:58:27.034478       1 server.go:846] "Version info" version="v1.28.4"
	I0315 05:58:27.034499       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 05:58:27.039776       1 config.go:188] "Starting service config controller"
	I0315 05:58:27.039796       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 05:58:27.039815       1 config.go:97] "Starting endpoint slice config controller"
	I0315 05:58:27.039818       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 05:58:27.055876       1 config.go:315] "Starting node config controller"
	I0315 05:58:27.081725       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 05:58:27.265678       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 05:58:27.265730       1 shared_informer.go:318] Caches are synced for service config
	I0315 05:58:27.283418       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e136a462a8f5d8ad3d9114693b9dabba32f4eba73160cad0189225e4b1ccba58] <==
	W0315 05:58:09.129182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 05:58:09.129224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 05:58:09.129513       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 05:58:09.129612       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 05:58:09.962678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 05:58:09.962706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 05:58:09.997141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 05:58:09.997191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 05:58:10.025336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 05:58:10.025428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 05:58:10.030163       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 05:58:10.030850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 05:58:10.098805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 05:58:10.098899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 05:58:10.136081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 05:58:10.136182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 05:58:10.177943       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 05:58:10.177988       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 05:58:10.198702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 05:58:10.198747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 05:58:10.200058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 05:58:10.200100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 05:58:10.402960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 05:58:10.403011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0315 05:58:12.607241       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.006479    1274 memory_manager.go:346] "RemoveStaleState removing state" podUID="b2adac73-7cd0-4584-9f3f-671f369cb8e3" containerName="hostpath"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.006514    1274 memory_manager.go:346] "RemoveStaleState removing state" podUID="b2adac73-7cd0-4584-9f3f-671f369cb8e3" containerName="liveness-probe"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.006608    1274 memory_manager.go:346] "RemoveStaleState removing state" podUID="b2adac73-7cd0-4584-9f3f-671f369cb8e3" containerName="csi-external-health-monitor-controller"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.006645    1274 memory_manager.go:346] "RemoveStaleState removing state" podUID="0e916c03-232a-4c31-95c8-0a7c4fe02e5c" containerName="task-pv-container"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.006748    1274 memory_manager.go:346] "RemoveStaleState removing state" podUID="f7ff4eb1-03a5-4e06-b74b-695154a55751" containerName="volume-snapshot-controller"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.126336    1274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57rh\" (UniqueName: \"kubernetes.io/projected/d4f6f023-f61f-4d11-8447-580b875fe665-kube-api-access-t57rh\") pod \"hello-world-app-5d77478584-zzpvx\" (UID: \"d4f6f023-f61f-4d11-8447-580b875fe665\") " pod="default/hello-world-app-5d77478584-zzpvx"
	Mar 15 06:02:53 addons-480837 kubelet[1274]: I0315 06:02:53.126445    1274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d4f6f023-f61f-4d11-8447-580b875fe665-gcp-creds\") pod \"hello-world-app-5d77478584-zzpvx\" (UID: \"d4f6f023-f61f-4d11-8447-580b875fe665\") " pod="default/hello-world-app-5d77478584-zzpvx"
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.134940    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2tx5\" (UniqueName: \"kubernetes.io/projected/4c71f38e-185d-45d5-8176-d657c024205c-kube-api-access-x2tx5\") pod \"4c71f38e-185d-45d5-8176-d657c024205c\" (UID: \"4c71f38e-185d-45d5-8176-d657c024205c\") "
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.139231    1274 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c71f38e-185d-45d5-8176-d657c024205c-kube-api-access-x2tx5" (OuterVolumeSpecName: "kube-api-access-x2tx5") pod "4c71f38e-185d-45d5-8176-d657c024205c" (UID: "4c71f38e-185d-45d5-8176-d657c024205c"). InnerVolumeSpecName "kube-api-access-x2tx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.236005    1274 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x2tx5\" (UniqueName: \"kubernetes.io/projected/4c71f38e-185d-45d5-8176-d657c024205c-kube-api-access-x2tx5\") on node \"addons-480837\" DevicePath \"\""
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.855723    1274 scope.go:117] "RemoveContainer" containerID="f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5"
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.896278    1274 scope.go:117] "RemoveContainer" containerID="f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5"
	Mar 15 06:02:54 addons-480837 kubelet[1274]: E0315 06:02:54.897394    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5\": container with ID starting with f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5 not found: ID does not exist" containerID="f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5"
	Mar 15 06:02:54 addons-480837 kubelet[1274]: I0315 06:02:54.897469    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5"} err="failed to get container status \"f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5\": rpc error: code = NotFound desc = could not find container \"f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5\": container with ID starting with f4613375857f6fd4fa518e64dd4d8bf2e341f7aa6e680201575bbb5cd12896a5 not found: ID does not exist"
	Mar 15 06:02:56 addons-480837 kubelet[1274]: I0315 06:02:56.666214    1274 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="296967d2-1839-4187-8cbf-5a608a1418de" path="/var/lib/kubelet/pods/296967d2-1839-4187-8cbf-5a608a1418de/volumes"
	Mar 15 06:02:56 addons-480837 kubelet[1274]: I0315 06:02:56.667782    1274 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4c71f38e-185d-45d5-8176-d657c024205c" path="/var/lib/kubelet/pods/4c71f38e-185d-45d5-8176-d657c024205c/volumes"
	Mar 15 06:02:56 addons-480837 kubelet[1274]: I0315 06:02:56.668183    1274 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a67cb70b-7840-4c82-ac7f-630326c9fc77" path="/var/lib/kubelet/pods/a67cb70b-7840-4c82-ac7f-630326c9fc77/volumes"
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.073940    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15a0168f-4959-45d8-802d-e770ab16e80f-webhook-cert\") pod \"15a0168f-4959-45d8-802d-e770ab16e80f\" (UID: \"15a0168f-4959-45d8-802d-e770ab16e80f\") "
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.074020    1274 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdcx\" (UniqueName: \"kubernetes.io/projected/15a0168f-4959-45d8-802d-e770ab16e80f-kube-api-access-sbdcx\") pod \"15a0168f-4959-45d8-802d-e770ab16e80f\" (UID: \"15a0168f-4959-45d8-802d-e770ab16e80f\") "
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.076424    1274 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a0168f-4959-45d8-802d-e770ab16e80f-kube-api-access-sbdcx" (OuterVolumeSpecName: "kube-api-access-sbdcx") pod "15a0168f-4959-45d8-802d-e770ab16e80f" (UID: "15a0168f-4959-45d8-802d-e770ab16e80f"). InnerVolumeSpecName "kube-api-access-sbdcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.077123    1274 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a0168f-4959-45d8-802d-e770ab16e80f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "15a0168f-4959-45d8-802d-e770ab16e80f" (UID: "15a0168f-4959-45d8-802d-e770ab16e80f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.175172    1274 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sbdcx\" (UniqueName: \"kubernetes.io/projected/15a0168f-4959-45d8-802d-e770ab16e80f-kube-api-access-sbdcx\") on node \"addons-480837\" DevicePath \"\""
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.175240    1274 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15a0168f-4959-45d8-802d-e770ab16e80f-webhook-cert\") on node \"addons-480837\" DevicePath \"\""
	Mar 15 06:02:59 addons-480837 kubelet[1274]: I0315 06:02:59.933699    1274 scope.go:117] "RemoveContainer" containerID="6757cb93ea4b80b8e77e6f03a003817fa1c0fece8e5f111843af1ca150a8b8be"
	Mar 15 06:03:00 addons-480837 kubelet[1274]: I0315 06:03:00.664976    1274 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="15a0168f-4959-45d8-802d-e770ab16e80f" path="/var/lib/kubelet/pods/15a0168f-4959-45d8-802d-e770ab16e80f/volumes"
	
	
	==> storage-provisioner [e2a1ec0aa2a4f76dae82c96d12c3b70396a2bd6682858be86eedd767343b6da5] <==
	I0315 05:58:34.785134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 05:58:34.899226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 05:58:34.899270       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 05:58:34.991805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 05:58:34.991967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-480837_055b3b54-a6fa-46db-a995-a177917f24ec!
	I0315 05:58:34.991999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"daa93012-6d41-437f-a298-d1366b3ee50a", APIVersion:"v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-480837_055b3b54-a6fa-46db-a995-a177917f24ec became leader
	I0315 05:58:35.093876       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-480837_055b3b54-a6fa-46db-a995-a177917f24ec!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-480837 -n addons-480837
helpers_test.go:261: (dbg) Run:  kubectl --context addons-480837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (161.74s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.47s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-480837
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-480837: exit status 82 (2m0.489035869s)

                                                
                                                
-- stdout --
	* Stopping node "addons-480837"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-480837" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-480837
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-480837: exit status 11 (21.697333148s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-480837" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-480837
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-480837: exit status 11 (6.143804787s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-480837" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-480837
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-480837: exit status 11 (6.142821218s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-480837" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.47s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 node stop m02 -v=7 --alsologtostderr
E0315 06:15:26.216336   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:15:42.994501   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:17:04.915263   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.50463608s)

                                                
                                                
-- stdout --
	* Stopping node "ha-866665-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:15:18.577117   28884 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:15:18.577299   28884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:15:18.577311   28884 out.go:304] Setting ErrFile to fd 2...
	I0315 06:15:18.577317   28884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:15:18.577536   28884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:15:18.577882   28884 mustload.go:65] Loading cluster: ha-866665
	I0315 06:15:18.578279   28884 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:15:18.578299   28884 stop.go:39] StopHost: ha-866665-m02
	I0315 06:15:18.578762   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:15:18.578830   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:15:18.597358   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0315 06:15:18.597959   28884 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:15:18.598773   28884 main.go:141] libmachine: Using API Version  1
	I0315 06:15:18.598801   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:15:18.599236   28884 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:15:18.602266   28884 out.go:177] * Stopping node "ha-866665-m02"  ...
	I0315 06:15:18.605544   28884 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 06:15:18.605619   28884 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:15:18.606016   28884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 06:15:18.606057   28884 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:15:18.610049   28884 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:15:18.610659   28884 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:15:18.610698   28884 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:15:18.610918   28884 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:15:18.611138   28884 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:15:18.611318   28884 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:15:18.611483   28884 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:15:18.710580   28884 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 06:15:18.768273   28884 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 06:15:18.824752   28884 main.go:141] libmachine: Stopping "ha-866665-m02"...
	I0315 06:15:18.824798   28884 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:15:18.826522   28884 main.go:141] libmachine: (ha-866665-m02) Calling .Stop
	I0315 06:15:18.830347   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 0/120
	I0315 06:15:19.831813   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 1/120
	I0315 06:15:20.833155   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 2/120
	I0315 06:15:21.834266   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 3/120
	I0315 06:15:22.835779   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 4/120
	I0315 06:15:23.836961   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 5/120
	I0315 06:15:24.838417   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 6/120
	I0315 06:15:25.839769   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 7/120
	I0315 06:15:26.841215   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 8/120
	I0315 06:15:27.842732   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 9/120
	I0315 06:15:28.844460   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 10/120
	I0315 06:15:29.846152   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 11/120
	I0315 06:15:30.847396   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 12/120
	I0315 06:15:31.849045   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 13/120
	I0315 06:15:32.850828   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 14/120
	I0315 06:15:33.852945   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 15/120
	I0315 06:15:34.855017   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 16/120
	I0315 06:15:35.856634   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 17/120
	I0315 06:15:36.858062   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 18/120
	I0315 06:15:37.859420   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 19/120
	I0315 06:15:38.861898   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 20/120
	I0315 06:15:39.863459   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 21/120
	I0315 06:15:40.864954   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 22/120
	I0315 06:15:41.866799   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 23/120
	I0315 06:15:42.868191   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 24/120
	I0315 06:15:43.869986   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 25/120
	I0315 06:15:44.871396   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 26/120
	I0315 06:15:45.873633   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 27/120
	I0315 06:15:46.875013   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 28/120
	I0315 06:15:47.876859   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 29/120
	I0315 06:15:48.878823   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 30/120
	I0315 06:15:49.880141   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 31/120
	I0315 06:15:50.881662   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 32/120
	I0315 06:15:51.883129   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 33/120
	I0315 06:15:52.884551   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 34/120
	I0315 06:15:53.886673   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 35/120
	I0315 06:15:54.888132   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 36/120
	I0315 06:15:55.889595   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 37/120
	I0315 06:15:56.890878   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 38/120
	I0315 06:15:57.892253   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 39/120
	I0315 06:15:58.894599   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 40/120
	I0315 06:15:59.896102   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 41/120
	I0315 06:16:00.897948   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 42/120
	I0315 06:16:01.899442   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 43/120
	I0315 06:16:02.901074   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 44/120
	I0315 06:16:03.902532   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 45/120
	I0315 06:16:04.904261   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 46/120
	I0315 06:16:05.905801   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 47/120
	I0315 06:16:06.907375   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 48/120
	I0315 06:16:07.908677   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 49/120
	I0315 06:16:08.910643   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 50/120
	I0315 06:16:09.912726   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 51/120
	I0315 06:16:10.914972   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 52/120
	I0315 06:16:11.916246   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 53/120
	I0315 06:16:12.917688   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 54/120
	I0315 06:16:13.919947   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 55/120
	I0315 06:16:14.921193   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 56/120
	I0315 06:16:15.922615   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 57/120
	I0315 06:16:16.923980   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 58/120
	I0315 06:16:17.925426   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 59/120
	I0315 06:16:18.927054   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 60/120
	I0315 06:16:19.928493   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 61/120
	I0315 06:16:20.929928   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 62/120
	I0315 06:16:21.931542   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 63/120
	I0315 06:16:22.933625   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 64/120
	I0315 06:16:23.934914   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 65/120
	I0315 06:16:24.937364   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 66/120
	I0315 06:16:25.938883   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 67/120
	I0315 06:16:26.940178   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 68/120
	I0315 06:16:27.941468   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 69/120
	I0315 06:16:28.943802   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 70/120
	I0315 06:16:29.945373   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 71/120
	I0315 06:16:30.947481   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 72/120
	I0315 06:16:31.948888   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 73/120
	I0315 06:16:32.950835   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 74/120
	I0315 06:16:33.952932   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 75/120
	I0315 06:16:34.954401   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 76/120
	I0315 06:16:35.955941   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 77/120
	I0315 06:16:36.957805   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 78/120
	I0315 06:16:37.959111   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 79/120
	I0315 06:16:38.961322   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 80/120
	I0315 06:16:39.962577   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 81/120
	I0315 06:16:40.963907   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 82/120
	I0315 06:16:41.965465   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 83/120
	I0315 06:16:42.966880   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 84/120
	I0315 06:16:43.968496   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 85/120
	I0315 06:16:44.969825   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 86/120
	I0315 06:16:45.971198   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 87/120
	I0315 06:16:46.972537   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 88/120
	I0315 06:16:47.973986   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 89/120
	I0315 06:16:48.976232   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 90/120
	I0315 06:16:49.977908   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 91/120
	I0315 06:16:50.979182   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 92/120
	I0315 06:16:51.980663   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 93/120
	I0315 06:16:52.981875   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 94/120
	I0315 06:16:53.983567   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 95/120
	I0315 06:16:54.985166   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 96/120
	I0315 06:16:55.987070   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 97/120
	I0315 06:16:56.988523   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 98/120
	I0315 06:16:57.989812   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 99/120
	I0315 06:16:58.991960   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 100/120
	I0315 06:16:59.993267   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 101/120
	I0315 06:17:00.995058   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 102/120
	I0315 06:17:01.996306   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 103/120
	I0315 06:17:02.997884   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 104/120
	I0315 06:17:03.999096   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 105/120
	I0315 06:17:05.000822   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 106/120
	I0315 06:17:06.002236   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 107/120
	I0315 06:17:07.003862   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 108/120
	I0315 06:17:08.005218   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 109/120
	I0315 06:17:09.007168   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 110/120
	I0315 06:17:10.008596   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 111/120
	I0315 06:17:11.011272   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 112/120
	I0315 06:17:12.012704   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 113/120
	I0315 06:17:13.014129   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 114/120
	I0315 06:17:14.015528   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 115/120
	I0315 06:17:15.016861   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 116/120
	I0315 06:17:16.018960   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 117/120
	I0315 06:17:17.020305   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 118/120
	I0315 06:17:18.021829   28884 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 119/120
	I0315 06:17:19.022433   28884 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 06:17:19.022571   28884 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-866665 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (19.054253559s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:17:19.079903   29188 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:17:19.080045   29188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:19.080055   29188 out.go:304] Setting ErrFile to fd 2...
	I0315 06:17:19.080059   29188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:19.080288   29188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:17:19.080457   29188 out.go:298] Setting JSON to false
	I0315 06:17:19.080503   29188 mustload.go:65] Loading cluster: ha-866665
	I0315 06:17:19.080634   29188 notify.go:220] Checking for updates...
	I0315 06:17:19.080922   29188 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:17:19.080942   29188 status.go:255] checking status of ha-866665 ...
	I0315 06:17:19.081397   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.081468   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.097342   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0315 06:17:19.097808   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.098406   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.098424   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.098811   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.099016   29188 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:17:19.100616   29188 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:17:19.100631   29188 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:19.101017   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.101064   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.115346   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0315 06:17:19.115763   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.116196   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.116215   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.116497   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.116667   29188 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:17:19.119197   29188 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:19.119636   29188 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:19.119661   29188 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:19.119819   29188 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:19.120094   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.120135   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.135062   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0315 06:17:19.135524   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.135974   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.135995   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.136312   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.136502   29188 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:17:19.136698   29188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:19.136729   29188 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:17:19.139068   29188 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:19.139508   29188 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:19.139545   29188 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:19.139700   29188 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:17:19.139871   29188 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:17:19.140090   29188 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:17:19.140214   29188 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:17:19.225327   29188 ssh_runner.go:195] Run: systemctl --version
	I0315 06:17:19.233873   29188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:19.253731   29188 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:19.253755   29188 api_server.go:166] Checking apiserver status ...
	I0315 06:17:19.253784   29188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:19.270471   29188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:17:19.280421   29188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:19.280499   29188 ssh_runner.go:195] Run: ls
	I0315 06:17:19.286201   29188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:19.291049   29188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:19.291075   29188 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:17:19.291084   29188 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:19.291099   29188 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:17:19.291370   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.291407   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.306585   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0315 06:17:19.306964   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.307430   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.307460   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.307806   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.308023   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:17:19.309586   29188 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:17:19.309603   29188 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:19.309882   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.309919   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.325512   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0315 06:17:19.325898   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.326421   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.326447   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.326825   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.327001   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:17:19.329331   29188 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:19.329709   29188 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:19.329734   29188 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:19.329827   29188 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:19.330117   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:19.330152   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:19.346268   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0315 06:17:19.346690   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:19.347169   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:19.347189   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:19.347496   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:19.347766   29188 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:17:19.347980   29188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:19.348012   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:17:19.351157   29188 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:19.351637   29188 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:19.351659   29188 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:19.351786   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:17:19.351955   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:17:19.352113   29188 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:17:19.352293   29188 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:17:37.700887   29188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:37.701004   29188 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:17:37.701022   29188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:37.701029   29188 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:17:37.701051   29188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:37.701063   29188 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:17:37.701348   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.701386   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.716005   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0315 06:17:37.716398   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.716865   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.716886   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.717204   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.717382   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:17:37.718979   29188 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:17:37.719000   29188 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:37.719326   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.719370   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.733460   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0315 06:17:37.734212   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.734817   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.734836   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.735273   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.735592   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:17:37.738415   29188 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:37.738898   29188 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:37.738932   29188 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:37.738993   29188 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:37.739273   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.739316   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.753624   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0315 06:17:37.753996   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.754466   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.754492   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.754817   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.754990   29188 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:17:37.755175   29188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:37.755197   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:17:37.757819   29188 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:37.758271   29188 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:37.758298   29188 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:37.758418   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:17:37.758594   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:17:37.758778   29188 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:17:37.758916   29188 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:17:37.850255   29188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:37.868980   29188 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:37.869004   29188 api_server.go:166] Checking apiserver status ...
	I0315 06:17:37.869035   29188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:37.886598   29188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:17:37.897232   29188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:37.897314   29188 ssh_runner.go:195] Run: ls
	I0315 06:17:37.902547   29188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:37.907492   29188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:37.907514   29188 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:17:37.907523   29188 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:37.907554   29188 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:17:37.907860   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.907903   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.922377   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0315 06:17:37.922808   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.923261   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.923280   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.923634   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.923861   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:17:37.925644   29188 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:17:37.925665   29188 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:37.926033   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.926071   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.944339   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I0315 06:17:37.944792   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.945265   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.945288   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.945651   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.945826   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:17:37.948523   29188 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:37.948997   29188 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:37.949020   29188 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:37.949175   29188 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:37.949470   29188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:37.949513   29188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:37.965374   29188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0315 06:17:37.965832   29188 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:37.966313   29188 main.go:141] libmachine: Using API Version  1
	I0315 06:17:37.966336   29188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:37.966621   29188 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:37.966796   29188 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:17:37.966979   29188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:37.967000   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:17:37.969683   29188 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:37.970099   29188 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:37.970139   29188 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:37.970240   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:17:37.970405   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:17:37.970599   29188 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:17:37.970734   29188 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:17:38.058010   29188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:38.076700   29188 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.473402449s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m03_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:10:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:10:22.050431   25161 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:10:22.050872   25161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:10:22.050889   25161 out.go:304] Setting ErrFile to fd 2...
	I0315 06:10:22.050896   25161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:10:22.051363   25161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:10:22.052191   25161 out.go:298] Setting JSON to false
	I0315 06:10:22.053167   25161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3118,"bootTime":1710479904,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:10:22.053231   25161 start.go:139] virtualization: kvm guest
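
The hostinfo entry above is a single JSON object describing the CI host. Decoding it is ordinary encoding/json work; the struct below is written for this example and covers only some of the fields shown in the log, so treat it as an illustrative sketch rather than minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
)

// HostInfo mirrors a few of the keys in the hostinfo log line above.
type HostInfo struct {
	Hostname             string `json:"hostname"`
	Uptime               uint64 `json:"uptime"`
	Platform             string `json:"platform"`
	PlatformVersion      string `json:"platformVersion"`
	KernelVersion        string `json:"kernelVersion"`
	VirtualizationSystem string `json:"virtualizationSystem"`
	VirtualizationRole   string `json:"virtualizationRole"`
}

func main() {
	raw := `{"hostname":"ubuntu-20-agent-8","uptime":3118,"platform":"ubuntu","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","virtualizationSystem":"kvm","virtualizationRole":"guest"}`
	var h HostInfo
	if err := json.Unmarshal([]byte(raw), &h); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s %s (%s), virtualization=%s/%s\n",
		h.Hostname, h.Platform, h.PlatformVersion, h.KernelVersion,
		h.VirtualizationSystem, h.VirtualizationRole)
}
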
	I0315 06:10:22.055390   25161 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:10:22.057035   25161 notify.go:220] Checking for updates...
	I0315 06:10:22.057040   25161 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:10:22.058646   25161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:10:22.060128   25161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:10:22.061381   25161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.062639   25161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:10:22.063930   25161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:10:22.065416   25161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:10:22.098997   25161 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 06:10:22.100271   25161 start.go:297] selected driver: kvm2
	I0315 06:10:22.100298   25161 start.go:901] validating driver "kvm2" against <nil>
	I0315 06:10:22.100318   25161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:10:22.101110   25161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:10:22.101216   25161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:10:22.115761   25161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:10:22.115811   25161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 06:10:22.116046   25161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:10:22.116119   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:10:22.116135   25161 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0315 06:10:22.116145   25161 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 06:10:22.116207   25161 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:10:22.116318   25161 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:10:22.118196   25161 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:10:22.119336   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:10:22.119381   25161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:10:22.119390   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:10:22.119491   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:10:22.119504   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:10:22.119818   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:10:22.119846   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json: {Name:mke78c2b04ea85297521b7aca846449b5918be83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:22.119987   25161 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:10:22.120042   25161 start.go:364] duration metric: took 38.981µs to acquireMachinesLock for "ha-866665"
	I0315 06:10:22.120069   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:10:22.120175   25161 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 06:10:22.122009   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:10:22.122157   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:10:22.122201   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:10:22.136061   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0315 06:10:22.136495   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:10:22.137081   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:10:22.137108   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:10:22.137486   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:10:22.137695   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:22.137851   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:22.138011   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:10:22.138044   25161 client.go:168] LocalClient.Create starting
	I0315 06:10:22.138078   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:10:22.138111   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:10:22.138127   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:10:22.138179   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:10:22.138196   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:10:22.138209   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:10:22.138224   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:10:22.138236   25161 main.go:141] libmachine: (ha-866665) Calling .PreCreateCheck
	I0315 06:10:22.138543   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:22.138903   25161 main.go:141] libmachine: Creating machine...
	I0315 06:10:22.138916   25161 main.go:141] libmachine: (ha-866665) Calling .Create
	I0315 06:10:22.139046   25161 main.go:141] libmachine: (ha-866665) Creating KVM machine...
	I0315 06:10:22.140180   25161 main.go:141] libmachine: (ha-866665) DBG | found existing default KVM network
	I0315 06:10:22.140833   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.140700   25184 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0315 06:10:22.140858   25161 main.go:141] libmachine: (ha-866665) DBG | created network xml: 
	I0315 06:10:22.140875   25161 main.go:141] libmachine: (ha-866665) DBG | <network>
	I0315 06:10:22.140886   25161 main.go:141] libmachine: (ha-866665) DBG |   <name>mk-ha-866665</name>
	I0315 06:10:22.140895   25161 main.go:141] libmachine: (ha-866665) DBG |   <dns enable='no'/>
	I0315 06:10:22.140905   25161 main.go:141] libmachine: (ha-866665) DBG |   
	I0315 06:10:22.140916   25161 main.go:141] libmachine: (ha-866665) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 06:10:22.140925   25161 main.go:141] libmachine: (ha-866665) DBG |     <dhcp>
	I0315 06:10:22.140942   25161 main.go:141] libmachine: (ha-866665) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 06:10:22.140961   25161 main.go:141] libmachine: (ha-866665) DBG |     </dhcp>
	I0315 06:10:22.140973   25161 main.go:141] libmachine: (ha-866665) DBG |   </ip>
	I0315 06:10:22.140982   25161 main.go:141] libmachine: (ha-866665) DBG |   
	I0315 06:10:22.141038   25161 main.go:141] libmachine: (ha-866665) DBG | </network>
	I0315 06:10:22.141057   25161 main.go:141] libmachine: (ha-866665) DBG | 
	I0315 06:10:22.146019   25161 main.go:141] libmachine: (ha-866665) DBG | trying to create private KVM network mk-ha-866665 192.168.39.0/24...
	I0315 06:10:22.213307   25161 main.go:141] libmachine: (ha-866665) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 ...
	I0315 06:10:22.213341   25161 main.go:141] libmachine: (ha-866665) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:10:22.213352   25161 main.go:141] libmachine: (ha-866665) DBG | private KVM network mk-ha-866665 192.168.39.0/24 created
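
The network XML printed above is what the kvm2 driver hands to libvirt for the private cluster network. As a rough way to reproduce the same network outside of minikube, the sketch below writes that XML to a temporary file and shells out to virsh (assumed to be installed and pointed at qemu:///system, matching KVMQemuURI in the config); this is not the driver's own code path, only an illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML matches the definition printed in the log above.
const networkXML = `<network>
  <name>mk-ha-866665</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

// virsh runs a virsh subcommand against the system libvirt connection.
func virsh(args ...string) error {
	full := append([]string{"-c", "qemu:///system"}, args...)
	cmd := exec.Command("virsh", full...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	f, err := os.CreateTemp("", "mk-ha-866665-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define and start the private network, mirroring the step logged above.
	if err := virsh("net-define", f.Name()); err != nil {
		panic(err)
	}
	if err := virsh("net-start", "mk-ha-866665"); err != nil {
		panic(err)
	}
	fmt.Println("network mk-ha-866665 defined and started")
}
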
	I0315 06:10:22.213370   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.213251   25184 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.213430   25161 main.go:141] libmachine: (ha-866665) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:10:22.435287   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.435157   25184 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa...
	I0315 06:10:22.563588   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.563463   25184 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/ha-866665.rawdisk...
	I0315 06:10:22.563613   25161 main.go:141] libmachine: (ha-866665) DBG | Writing magic tar header
	I0315 06:10:22.563624   25161 main.go:141] libmachine: (ha-866665) DBG | Writing SSH key tar header
	I0315 06:10:22.563654   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.563616   25184 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 ...
	I0315 06:10:22.563778   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 (perms=drwx------)
	I0315 06:10:22.563798   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665
	I0315 06:10:22.563809   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:10:22.563823   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:10:22.563834   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:10:22.563844   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:10:22.563858   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.563867   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:10:22.563879   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:10:22.563886   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:10:22.563897   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:10:22.563908   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home
	I0315 06:10:22.563924   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:10:22.563938   25161 main.go:141] libmachine: (ha-866665) DBG | Skipping /home - not owner
	I0315 06:10:22.563950   25161 main.go:141] libmachine: (ha-866665) Creating domain...
	I0315 06:10:22.565044   25161 main.go:141] libmachine: (ha-866665) define libvirt domain using xml: 
	I0315 06:10:22.565069   25161 main.go:141] libmachine: (ha-866665) <domain type='kvm'>
	I0315 06:10:22.565079   25161 main.go:141] libmachine: (ha-866665)   <name>ha-866665</name>
	I0315 06:10:22.565087   25161 main.go:141] libmachine: (ha-866665)   <memory unit='MiB'>2200</memory>
	I0315 06:10:22.565095   25161 main.go:141] libmachine: (ha-866665)   <vcpu>2</vcpu>
	I0315 06:10:22.565105   25161 main.go:141] libmachine: (ha-866665)   <features>
	I0315 06:10:22.565111   25161 main.go:141] libmachine: (ha-866665)     <acpi/>
	I0315 06:10:22.565117   25161 main.go:141] libmachine: (ha-866665)     <apic/>
	I0315 06:10:22.565123   25161 main.go:141] libmachine: (ha-866665)     <pae/>
	I0315 06:10:22.565138   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565146   25161 main.go:141] libmachine: (ha-866665)   </features>
	I0315 06:10:22.565151   25161 main.go:141] libmachine: (ha-866665)   <cpu mode='host-passthrough'>
	I0315 06:10:22.565159   25161 main.go:141] libmachine: (ha-866665)   
	I0315 06:10:22.565167   25161 main.go:141] libmachine: (ha-866665)   </cpu>
	I0315 06:10:22.565197   25161 main.go:141] libmachine: (ha-866665)   <os>
	I0315 06:10:22.565221   25161 main.go:141] libmachine: (ha-866665)     <type>hvm</type>
	I0315 06:10:22.565236   25161 main.go:141] libmachine: (ha-866665)     <boot dev='cdrom'/>
	I0315 06:10:22.565247   25161 main.go:141] libmachine: (ha-866665)     <boot dev='hd'/>
	I0315 06:10:22.565261   25161 main.go:141] libmachine: (ha-866665)     <bootmenu enable='no'/>
	I0315 06:10:22.565271   25161 main.go:141] libmachine: (ha-866665)   </os>
	I0315 06:10:22.565282   25161 main.go:141] libmachine: (ha-866665)   <devices>
	I0315 06:10:22.565298   25161 main.go:141] libmachine: (ha-866665)     <disk type='file' device='cdrom'>
	I0315 06:10:22.565315   25161 main.go:141] libmachine: (ha-866665)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/boot2docker.iso'/>
	I0315 06:10:22.565327   25161 main.go:141] libmachine: (ha-866665)       <target dev='hdc' bus='scsi'/>
	I0315 06:10:22.565340   25161 main.go:141] libmachine: (ha-866665)       <readonly/>
	I0315 06:10:22.565350   25161 main.go:141] libmachine: (ha-866665)     </disk>
	I0315 06:10:22.565361   25161 main.go:141] libmachine: (ha-866665)     <disk type='file' device='disk'>
	I0315 06:10:22.565374   25161 main.go:141] libmachine: (ha-866665)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:10:22.565389   25161 main.go:141] libmachine: (ha-866665)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/ha-866665.rawdisk'/>
	I0315 06:10:22.565402   25161 main.go:141] libmachine: (ha-866665)       <target dev='hda' bus='virtio'/>
	I0315 06:10:22.565410   25161 main.go:141] libmachine: (ha-866665)     </disk>
	I0315 06:10:22.565423   25161 main.go:141] libmachine: (ha-866665)     <interface type='network'>
	I0315 06:10:22.565448   25161 main.go:141] libmachine: (ha-866665)       <source network='mk-ha-866665'/>
	I0315 06:10:22.565461   25161 main.go:141] libmachine: (ha-866665)       <model type='virtio'/>
	I0315 06:10:22.565477   25161 main.go:141] libmachine: (ha-866665)     </interface>
	I0315 06:10:22.565489   25161 main.go:141] libmachine: (ha-866665)     <interface type='network'>
	I0315 06:10:22.565498   25161 main.go:141] libmachine: (ha-866665)       <source network='default'/>
	I0315 06:10:22.565511   25161 main.go:141] libmachine: (ha-866665)       <model type='virtio'/>
	I0315 06:10:22.565542   25161 main.go:141] libmachine: (ha-866665)     </interface>
	I0315 06:10:22.565563   25161 main.go:141] libmachine: (ha-866665)     <serial type='pty'>
	I0315 06:10:22.565576   25161 main.go:141] libmachine: (ha-866665)       <target port='0'/>
	I0315 06:10:22.565586   25161 main.go:141] libmachine: (ha-866665)     </serial>
	I0315 06:10:22.565596   25161 main.go:141] libmachine: (ha-866665)     <console type='pty'>
	I0315 06:10:22.565613   25161 main.go:141] libmachine: (ha-866665)       <target type='serial' port='0'/>
	I0315 06:10:22.565631   25161 main.go:141] libmachine: (ha-866665)     </console>
	I0315 06:10:22.565642   25161 main.go:141] libmachine: (ha-866665)     <rng model='virtio'>
	I0315 06:10:22.565654   25161 main.go:141] libmachine: (ha-866665)       <backend model='random'>/dev/random</backend>
	I0315 06:10:22.565664   25161 main.go:141] libmachine: (ha-866665)     </rng>
	I0315 06:10:22.565672   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565686   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565698   25161 main.go:141] libmachine: (ha-866665)   </devices>
	I0315 06:10:22.565708   25161 main.go:141] libmachine: (ha-866665) </domain>
	I0315 06:10:22.565719   25161 main.go:141] libmachine: (ha-866665) 
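
The domain definition above is plain libvirt XML assembled by the driver (name, memory, vCPUs, boot order, the two disks and two network interfaces). To show how the key fields map onto a structure, here is a hedged sketch that marshals a minimal subset of that schema with encoding/xml; the Go types are invented for this example and omit most elements (disks, interfaces, serial console, rng).

package main

import (
	"encoding/xml"
	"os"
)

// Domain models a small slice of the libvirt domain schema seen in the log.
type Domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	VCPU int `xml:"vcpu"`
	OS   struct {
		Type string `xml:"type"`
	} `xml:"os"`
}

func main() {
	d := Domain{Type: "kvm", Name: "ha-866665", VCPU: 2}
	d.Memory.Unit = "MiB"
	d.Memory.Value = "2200"
	d.OS.Type = "hvm"

	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(append(out, '\n'))
}
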
	I0315 06:10:22.569993   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:ff:88:6e in network default
	I0315 06:10:22.570558   25161 main.go:141] libmachine: (ha-866665) Ensuring networks are active...
	I0315 06:10:22.570582   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:22.571265   25161 main.go:141] libmachine: (ha-866665) Ensuring network default is active
	I0315 06:10:22.571537   25161 main.go:141] libmachine: (ha-866665) Ensuring network mk-ha-866665 is active
	I0315 06:10:22.572033   25161 main.go:141] libmachine: (ha-866665) Getting domain xml...
	I0315 06:10:22.572727   25161 main.go:141] libmachine: (ha-866665) Creating domain...
	I0315 06:10:23.736605   25161 main.go:141] libmachine: (ha-866665) Waiting to get IP...
	I0315 06:10:23.737432   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:23.737824   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:23.737851   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:23.737801   25184 retry.go:31] will retry after 269.541809ms: waiting for machine to come up
	I0315 06:10:24.009421   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.009981   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.009999   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.009946   25184 retry.go:31] will retry after 355.494322ms: waiting for machine to come up
	I0315 06:10:24.367853   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.368348   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.368367   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.368297   25184 retry.go:31] will retry after 469.840562ms: waiting for machine to come up
	I0315 06:10:24.839880   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.840325   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.840353   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.840295   25184 retry.go:31] will retry after 509.329258ms: waiting for machine to come up
	I0315 06:10:25.351724   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:25.352604   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:25.352629   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:25.352542   25184 retry.go:31] will retry after 724.359107ms: waiting for machine to come up
	I0315 06:10:26.078398   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:26.078770   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:26.078790   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:26.078744   25184 retry.go:31] will retry after 572.771794ms: waiting for machine to come up
	I0315 06:10:26.653590   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:26.654002   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:26.654048   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:26.653957   25184 retry.go:31] will retry after 964.305506ms: waiting for machine to come up
	I0315 06:10:27.619838   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:27.620282   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:27.620316   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:27.620240   25184 retry.go:31] will retry after 1.385577587s: waiting for machine to come up
	I0315 06:10:29.007802   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:29.008244   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:29.008273   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:29.008187   25184 retry.go:31] will retry after 1.288467263s: waiting for machine to come up
	I0315 06:10:30.298780   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:30.299311   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:30.299349   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:30.299245   25184 retry.go:31] will retry after 2.203379402s: waiting for machine to come up
	I0315 06:10:32.503823   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:32.504208   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:32.504234   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:32.504159   25184 retry.go:31] will retry after 2.163155246s: waiting for machine to come up
	I0315 06:10:34.670370   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:34.670822   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:34.670846   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:34.670779   25184 retry.go:31] will retry after 2.490179724s: waiting for machine to come up
	I0315 06:10:37.162916   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:37.163316   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:37.163344   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:37.163272   25184 retry.go:31] will retry after 4.132551358s: waiting for machine to come up
	I0315 06:10:41.300521   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:41.300982   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:41.301009   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:41.300940   25184 retry.go:31] will retry after 4.068921352s: waiting for machine to come up
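
The repeated "will retry after …: waiting for machine to come up" lines show the driver polling for the VM's DHCP lease with growing, jittered delays. A minimal stand-in for that pattern, assuming nothing about minikube's actual retry helper beyond what the log shows, looks like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with roughly exponential, jittered backoff until it
// succeeds or the deadline passes. It is a simplified stand-in for the
// behaviour visible in the "will retry after ..." log lines above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Jitter keeps parallel waiters from retrying in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Placeholder check; the real caller asks libvirt for the domain's IP.
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
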
	I0315 06:10:45.374044   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.374464   25161 main.go:141] libmachine: (ha-866665) Found IP for machine: 192.168.39.78
	I0315 06:10:45.374481   25161 main.go:141] libmachine: (ha-866665) Reserving static IP address...
	I0315 06:10:45.374490   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has current primary IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.374815   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find host DHCP lease matching {name: "ha-866665", mac: "52:54:00:96:55:9d", ip: "192.168.39.78"} in network mk-ha-866665
	I0315 06:10:45.447565   25161 main.go:141] libmachine: (ha-866665) DBG | Getting to WaitForSSH function...
	I0315 06:10:45.447590   25161 main.go:141] libmachine: (ha-866665) Reserved static IP address: 192.168.39.78
	I0315 06:10:45.447603   25161 main.go:141] libmachine: (ha-866665) Waiting for SSH to be available...
	I0315 06:10:45.450145   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.450497   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.450531   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.450621   25161 main.go:141] libmachine: (ha-866665) DBG | Using SSH client type: external
	I0315 06:10:45.450650   25161 main.go:141] libmachine: (ha-866665) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa (-rw-------)
	I0315 06:10:45.450677   25161 main.go:141] libmachine: (ha-866665) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:10:45.450686   25161 main.go:141] libmachine: (ha-866665) DBG | About to run SSH command:
	I0315 06:10:45.450698   25161 main.go:141] libmachine: (ha-866665) DBG | exit 0
	I0315 06:10:45.572600   25161 main.go:141] libmachine: (ha-866665) DBG | SSH cmd err, output: <nil>: 
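
WaitForSSH above invokes an external ssh client with the option list printed a few lines earlier and runs `exit 0` until the command succeeds. A hedged reproduction of a single probe (options, key path and address copied from the log; the surrounding error handling is illustrative, not minikube's) is:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Option list mirrors the external SSH command printed by the driver above.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa",
		"-p", "22",
		"docker@192.168.39.78",
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
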
	I0315 06:10:45.572916   25161 main.go:141] libmachine: (ha-866665) KVM machine creation complete!
	I0315 06:10:45.573224   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:45.573796   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:45.573975   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:45.574136   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:10:45.574152   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:10:45.575354   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:10:45.575369   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:10:45.575375   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:10:45.575380   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.577589   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.577839   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.577868   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.578001   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.578154   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.578339   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.578514   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.578725   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.578933   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.578951   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:10:45.675997   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:10:45.676016   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:10:45.676023   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.678790   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.679151   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.679177   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.679280   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.679507   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.679684   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.679843   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.679981   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.680200   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.680214   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:10:45.777471   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:10:45.777553   25161 main.go:141] libmachine: found compatible host: buildroot
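
"Detecting the provisioner" comes down to reading /etc/os-release over SSH and matching its fields, which is how the Buildroot output above turns into "found compatible host: buildroot". A small sketch of that key=value parsing, written for this example and operating on a local string instead of SSH output:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release style KEY=value lines into a map,
// stripping optional surrounding quotes.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(value, `"`)
	}
	return out
}

func main() {
	// Sample taken from the SSH output in the log above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["NAME"], info["VERSION_ID"]) // Buildroot 2023.02.9
}
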
	I0315 06:10:45.777564   25161 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:10:45.777573   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:45.777807   25161 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:10:45.777835   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:45.777991   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.780835   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.781144   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.781177   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.781327   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.781526   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.781711   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.781817   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.782015   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.782175   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.782186   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:10:45.894829   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:10:45.894868   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.897660   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.897993   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.898016   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.898172   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.898396   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.898570   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.898748   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.898911   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.899066   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.899095   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:10:46.006028   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:10:46.006060   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:10:46.006079   25161 buildroot.go:174] setting up certificates
	I0315 06:10:46.006091   25161 provision.go:84] configureAuth start
	I0315 06:10:46.006099   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:46.006401   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.008911   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.009300   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.009328   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.009472   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.011698   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.012123   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.012153   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.012399   25161 provision.go:143] copyHostCerts
	I0315 06:10:46.012428   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:10:46.012489   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:10:46.012501   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:10:46.012567   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:10:46.012672   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:10:46.012694   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:10:46.012699   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:10:46.012727   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:10:46.012770   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:10:46.012792   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:10:46.012799   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:10:46.012819   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:10:46.012862   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
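
The "generating server cert" line above lists the SANs baked into the VM's server certificate (loopback, the VM IP, the hostname, localhost, minikube). The sketch below shows how such a certificate could be produced with crypto/x509; it is self-signed for brevity, whereas minikube signs the server cert with its own CA (ca.pem/ca-key.pem), and the names and lifetime are copied from the log and config above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-866665"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-866665", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
	}
	// Self-signed here for brevity; minikube uses its CA as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
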
	I0315 06:10:46.114579   25161 provision.go:177] copyRemoteCerts
	I0315 06:10:46.114641   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:10:46.114669   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.117364   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.117780   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.117809   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.118021   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.118212   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.118390   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.118526   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.199310   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:10:46.199373   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:10:46.224003   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:10:46.224106   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0315 06:10:46.248435   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:10:46.248523   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:10:46.272294   25161 provision.go:87] duration metric: took 266.191988ms to configureAuth
	I0315 06:10:46.272328   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:10:46.272538   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:10:46.272627   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.275562   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.275981   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.276023   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.276163   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.276385   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.276517   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.276701   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.276867   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:46.277048   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:46.277071   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:10:46.538977   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:10:46.539024   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:10:46.539032   25161 main.go:141] libmachine: (ha-866665) Calling .GetURL
	I0315 06:10:46.540356   25161 main.go:141] libmachine: (ha-866665) DBG | Using libvirt version 6000000
	I0315 06:10:46.542333   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.542620   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.542639   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.542807   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:10:46.542826   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:10:46.542833   25161 client.go:171] duration metric: took 24.404778843s to LocalClient.Create
	I0315 06:10:46.542857   25161 start.go:167] duration metric: took 24.404846145s to libmachine.API.Create "ha-866665"
	I0315 06:10:46.542870   25161 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:10:46.542883   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:10:46.542915   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.543138   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:10:46.543163   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.545171   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.545465   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.545497   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.545595   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.545782   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.545957   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.546062   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.623204   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:10:46.627555   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:10:46.627579   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:10:46.627705   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:10:46.627795   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:10:46.627806   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:10:46.627895   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:10:46.638848   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:10:46.666574   25161 start.go:296] duration metric: took 123.69068ms for postStartSetup
	I0315 06:10:46.666628   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:46.667229   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.669803   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.670172   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.670194   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.670420   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:10:46.670631   25161 start.go:128] duration metric: took 24.550442544s to createHost
	I0315 06:10:46.670659   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.672755   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.673063   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.673088   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.673196   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.673370   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.673556   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.673663   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.673817   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:46.674009   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:46.674028   25161 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:10:46.773443   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483046.746376540
	
	I0315 06:10:46.773467   25161 fix.go:216] guest clock: 1710483046.746376540
	I0315 06:10:46.773477   25161 fix.go:229] Guest: 2024-03-15 06:10:46.74637654 +0000 UTC Remote: 2024-03-15 06:10:46.670646135 +0000 UTC m=+24.668914568 (delta=75.730405ms)
	I0315 06:10:46.773518   25161 fix.go:200] guest clock delta is within tolerance: 75.730405ms
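For reference, the delta reported above is simply the difference between the two timestamps: 1710483046.746376540 s (guest) - 1710483046.670646135 s (local) = 0.075730405 s, i.e. about 75.73 ms, so the guest clock runs roughly 76 ms ahead of the host at this point, which minikube treats as within tolerance.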
	I0315 06:10:46.773527   25161 start.go:83] releasing machines lock for "ha-866665", held for 24.653469865s
	I0315 06:10:46.773549   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.773840   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.776569   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.776912   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.776943   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.777132   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777661   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777840   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777938   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:10:46.777981   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.778075   25161 ssh_runner.go:195] Run: cat /version.json
	I0315 06:10:46.778103   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.780425   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780612   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780828   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.780855   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780963   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.780985   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780996   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.781148   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.781201   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.781295   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.781371   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.781424   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.781502   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.781565   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.853380   25161 ssh_runner.go:195] Run: systemctl --version
	I0315 06:10:46.890714   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:10:47.062319   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:10:47.068972   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:10:47.069031   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:10:47.087360   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:10:47.087388   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:10:47.087454   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:10:47.103753   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:10:47.118832   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:10:47.118898   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:10:47.133344   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:10:47.148065   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:10:47.257782   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:10:47.415025   25161 docker.go:233] disabling docker service ...
	I0315 06:10:47.415117   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:10:47.430257   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:10:47.443144   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:10:47.565290   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:10:47.683033   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:10:47.698205   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:10:47.717813   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:10:47.717877   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.729049   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:10:47.729112   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.739834   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.750874   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
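Taken together, the crictl.yaml write and the three sed edits above point the CLI at unix:///var/run/crio/crio.sock and rewrite three keys in the 02-crio.conf drop-in. Assuming CRI-O's usual section layout (the rest of the drop-in shipped on the ISO is not shown in this log), the edited portion would read roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"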
	I0315 06:10:47.761604   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:10:47.772612   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:10:47.782572   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:10:47.782627   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:10:47.797200   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
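Note that the modprobe and the echo into /proc above take effect for the current boot only; the persistent equivalent recommended in the Kubernetes container-runtime prerequisites (not something this log shows minikube writing) would look along these lines:

	# /etc/modules-load.d/k8s.conf
	br_netfilter

	# /etc/sysctl.d/k8s.conf
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward                = 1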
	I0315 06:10:47.807675   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:10:47.926805   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:10:48.064995   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:10:48.065064   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:10:48.070184   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:10:48.070231   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:10:48.074107   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:10:48.111051   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:10:48.111120   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:10:48.139812   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:10:48.171363   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:10:48.172663   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:48.175331   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:48.175663   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:48.175690   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:48.175866   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:10:48.180029   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:10:48.193238   25161 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:10:48.193374   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:10:48.193425   25161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:10:48.225832   25161 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 06:10:48.225887   25161 ssh_runner.go:195] Run: which lz4
	I0315 06:10:48.229904   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0315 06:10:48.229974   25161 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 06:10:48.234179   25161 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 06:10:48.234210   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 06:10:49.956064   25161 crio.go:444] duration metric: took 1.726111064s to copy over tarball
	I0315 06:10:49.956128   25161 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 06:10:52.358393   25161 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.402235402s)
	I0315 06:10:52.358430   25161 crio.go:451] duration metric: took 2.40234102s to extract the tarball
	I0315 06:10:52.358440   25161 ssh_runner.go:146] rm: /preloaded.tar.lz4
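From the figures above, the preload copy moved 458,073,571 bytes in about 1.726 s, i.e. roughly 265 MB/s over the SSH connection to the VM, and unpacking the same tarball with lz4 into /var took a further 2.40 s.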
	I0315 06:10:52.402370   25161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:10:52.448534   25161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:10:52.448561   25161 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:10:52.448571   25161 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:10:52.448707   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:10:52.448780   25161 ssh_runner.go:195] Run: crio config
	I0315 06:10:52.493214   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:10:52.493238   25161 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 06:10:52.493249   25161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:10:52.493267   25161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:10:52.493394   25161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 06:10:52.493424   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:10:52.493481   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:10:52.511497   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:10:52.511618   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 06:10:52.511684   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:10:52.521808   25161 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:10:52.521872   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:10:52.531706   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:10:52.548963   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:10:52.565745   25161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:10:52.583246   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:10:52.600918   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:10:52.605045   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:10:52.617352   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:10:52.732776   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:10:52.749351   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:10:52.749373   25161 certs.go:194] generating shared ca certs ...
	I0315 06:10:52.749386   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.749522   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:10:52.749561   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:10:52.749569   25161 certs.go:256] generating profile certs ...
	I0315 06:10:52.749625   25161 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:10:52.749639   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt with IP's: []
	I0315 06:10:52.812116   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt ...
	I0315 06:10:52.812142   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt: {Name:mke5907f5cfc66a67f0f76eff96e868fbd1233e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.812324   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key ...
	I0315 06:10:52.812337   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key: {Name:mkdc7da3f09b5ab449f3abedb8f51edf6d84c254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.812415   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926
	I0315 06:10:52.812430   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.254]
	I0315 06:10:52.886122   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 ...
	I0315 06:10:52.886158   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926: {Name:mk2e805aca2504c2638efb9dda22ab0fed9ba051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.886335   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926 ...
	I0315 06:10:52.886351   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926: {Name:mk69e895b0b36226f84d4728c7b95565f24b0bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.886424   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:10:52.886513   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:10:52.886564   25161 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:10:52.886582   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt with IP's: []
	I0315 06:10:53.069389   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt ...
	I0315 06:10:53.069418   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt: {Name:mk3ae531538aaa57a97c1b9779a2bc292afd5f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:53.069560   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key ...
	I0315 06:10:53.069571   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key: {Name:mk39feed49c56fa9080f460282da6bba51dd9975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
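The crypto.go steps above generate an API-server certificate whose IP SANs cover the service VIP, localhost, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.78, 192.168.39.254). As a sketch of the underlying standard-library mechanics only (not minikube's crypto.go, which signs against the shared minikubeCA rather than self-signing), a certificate carrying those SANs could be produced like this:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs taken from the apiserver cert line in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.78"),
				net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed here for brevity; minikube issues this cert from its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}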
	I0315 06:10:53.069646   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:10:53.069663   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:10:53.069672   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:10:53.069691   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:10:53.069703   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:10:53.069713   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:10:53.069725   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:10:53.069735   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:10:53.069785   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:10:53.069818   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:10:53.069832   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:10:53.069855   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:10:53.069876   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:10:53.069901   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:10:53.069940   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:10:53.069965   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.069978   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.069989   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.070515   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:10:53.099857   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:10:53.126915   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:10:53.153257   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:10:53.178995   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 06:10:53.205162   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:10:53.231878   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:10:53.258021   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:10:53.285933   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:10:53.313170   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:10:53.338880   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:10:53.364443   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:10:53.382241   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:10:53.388428   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:10:53.400996   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.405861   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.405912   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.411876   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:10:53.423763   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:10:53.435979   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.440603   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.440657   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.446520   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:10:53.458396   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:10:53.470239   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.475125   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.475179   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.483367   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
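The b5213941.0, 51391683.0 and 3ec20f2e.0 links created above are OpenSSL subject-hash names, which is how tools locate CA certificates in /etc/ssl/certs. A minimal sketch of the same idea, reusing the exact openssl invocation from the log (illustrative only; not minikube's certs.go):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
	// mirroring the openssl/ln commands in the log above.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, as `ln -fs` does
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}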
	I0315 06:10:53.495783   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:10:53.500376   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:10:53.500435   25161 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:10:53.500549   25161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:10:53.500610   25161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:10:53.544607   25161 cri.go:89] found id: ""
	I0315 06:10:53.544672   25161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 06:10:53.558526   25161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 06:10:53.570799   25161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 06:10:53.589500   25161 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 06:10:53.589522   25161 kubeadm.go:156] found existing configuration files:
	
	I0315 06:10:53.589575   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 06:10:53.601642   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 06:10:53.601713   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 06:10:53.614300   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 06:10:53.627314   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 06:10:53.627371   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 06:10:53.639841   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 06:10:53.651949   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 06:10:53.652023   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 06:10:53.662956   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 06:10:53.672956   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 06:10:53.673035   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 06:10:53.683576   25161 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 06:10:53.791497   25161 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 06:10:53.791603   25161 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 06:10:53.926570   25161 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 06:10:53.926725   25161 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 06:10:53.926884   25161 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 06:10:54.140322   25161 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 06:10:54.240759   25161 out.go:204]   - Generating certificates and keys ...
	I0315 06:10:54.240858   25161 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 06:10:54.240936   25161 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 06:10:54.315095   25161 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 06:10:54.736716   25161 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 06:10:54.813228   25161 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 06:10:55.115299   25161 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 06:10:55.224421   25161 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 06:10:55.224597   25161 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-866665 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0315 06:10:55.282784   25161 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 06:10:55.283087   25161 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-866665 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0315 06:10:55.657171   25161 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 06:10:55.822466   25161 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 06:10:56.141839   25161 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 06:10:56.142014   25161 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 06:10:56.343288   25161 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 06:10:56.482472   25161 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 06:10:56.614382   25161 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 06:10:56.813589   25161 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 06:10:56.814099   25161 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 06:10:56.818901   25161 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 06:10:56.820947   25161 out.go:204]   - Booting up control plane ...
	I0315 06:10:56.821044   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 06:10:56.821113   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 06:10:56.821172   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 06:10:56.835970   25161 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 06:10:56.836936   25161 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 06:10:56.836985   25161 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 06:10:56.968185   25161 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 06:11:04.062364   25161 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.098252 seconds
	I0315 06:11:04.062500   25161 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 06:11:04.085094   25161 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 06:11:04.618994   25161 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 06:11:04.619194   25161 kubeadm.go:309] [mark-control-plane] Marking the node ha-866665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 06:11:05.133303   25161 kubeadm.go:309] [bootstrap-token] Using token: kltubs.8avr8euk1lbixl0k
	I0315 06:11:05.134809   25161 out.go:204]   - Configuring RBAC rules ...
	I0315 06:11:05.134931   25161 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 06:11:05.140662   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 06:11:05.148671   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 06:11:05.152686   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 06:11:05.160280   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 06:11:05.164264   25161 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 06:11:05.180896   25161 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 06:11:05.429159   25161 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 06:11:05.547540   25161 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 06:11:05.552787   25161 kubeadm.go:309] 
	I0315 06:11:05.552861   25161 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 06:11:05.552878   25161 kubeadm.go:309] 
	I0315 06:11:05.553011   25161 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 06:11:05.553022   25161 kubeadm.go:309] 
	I0315 06:11:05.553048   25161 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 06:11:05.553156   25161 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 06:11:05.553235   25161 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 06:11:05.553246   25161 kubeadm.go:309] 
	I0315 06:11:05.553318   25161 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 06:11:05.553328   25161 kubeadm.go:309] 
	I0315 06:11:05.553430   25161 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 06:11:05.553448   25161 kubeadm.go:309] 
	I0315 06:11:05.553539   25161 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 06:11:05.553645   25161 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 06:11:05.553766   25161 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 06:11:05.553786   25161 kubeadm.go:309] 
	I0315 06:11:05.553906   25161 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 06:11:05.554025   25161 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 06:11:05.554035   25161 kubeadm.go:309] 
	I0315 06:11:05.554150   25161 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kltubs.8avr8euk1lbixl0k \
	I0315 06:11:05.554260   25161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 06:11:05.554282   25161 kubeadm.go:309] 	--control-plane 
	I0315 06:11:05.554286   25161 kubeadm.go:309] 
	I0315 06:11:05.554353   25161 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 06:11:05.554360   25161 kubeadm.go:309] 
	I0315 06:11:05.554457   25161 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kltubs.8avr8euk1lbixl0k \
	I0315 06:11:05.554581   25161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 06:11:05.563143   25161 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 06:11:05.563178   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:11:05.563194   25161 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 06:11:05.564745   25161 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 06:11:05.565921   25161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 06:11:05.573563   25161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 06:11:05.573581   25161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 06:11:05.666813   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 06:11:06.516825   25161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 06:11:06.516866   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:06.516974   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665 minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=true
	I0315 06:11:06.656186   25161 ops.go:34] apiserver oom_adj: -16
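
The oom_adj line above comes from the bash one-liner at 06:11:06.516825: minikube reads /proc/$(pgrep kube-apiserver)/oom_adj and logs -16, i.e. the apiserver is strongly protected from the OOM killer. A minimal Go sketch of the same probe, assuming a Linux host with pgrep available (error handling simplified, not the code path minikube itself uses):

// oomadj_probe.go - minimal sketch of the oom_adj check logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiserverOOMAdj() (string, error) {
	// Find the kube-apiserver PID, as the logged bash one-liner does with pgrep.
	pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not found: %w", err)
	}
	pid := strings.TrimSpace(string(pidOut))
	// Read its OOM adjustment straight from procfs.
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(raw)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // this run logged -16
}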
	I0315 06:11:06.656606   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:07.157415   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:07.657228   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:08.157249   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:08.657447   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:09.156869   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:09.656995   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:10.156634   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:10.657577   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:11.156792   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:11.656723   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:12.157097   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:12.657699   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:13.156802   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:13.657548   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:14.157618   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:14.657657   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:15.156920   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:15.657599   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:16.157545   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:16.657334   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.157318   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.657750   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.778194   25161 kubeadm.go:1107] duration metric: took 11.261376371s to wait for elevateKubeSystemPrivileges
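
The block of repeated "kubectl get sa default" runs above is a plain poll: the command is re-issued roughly every 500ms until the default service account exists (11.26s in this run), which is what the elevateKubeSystemPrivileges duration measures. A hedged Go sketch of that poll pattern; the kubectl and kubeconfig paths mirror this run but the timeout is illustrative:

// sa_wait.go - sketch of polling until the "default" service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}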
	W0315 06:11:17.778241   25161 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 06:11:17.778251   25161 kubeadm.go:393] duration metric: took 24.277818857s to StartCluster
	I0315 06:11:17.778266   25161 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:17.778330   25161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:11:17.778982   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:17.779207   25161 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:17.779227   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:11:17.779215   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 06:11:17.779293   25161 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 06:11:17.779361   25161 addons.go:69] Setting storage-provisioner=true in profile "ha-866665"
	I0315 06:11:17.779385   25161 addons.go:69] Setting default-storageclass=true in profile "ha-866665"
	I0315 06:11:17.779410   25161 addons.go:234] Setting addon storage-provisioner=true in "ha-866665"
	I0315 06:11:17.779421   25161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-866665"
	I0315 06:11:17.779433   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:17.779443   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:17.779833   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.779872   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.780042   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.780106   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.794793   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 06:11:17.795024   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0315 06:11:17.795216   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.795393   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.795754   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.795777   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.795872   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.795911   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.796136   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.796213   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.796362   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.796717   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.796759   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.798343   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:11:17.798565   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 06:11:17.798962   25161 cert_rotation.go:137] Starting client certificate rotation controller
	I0315 06:11:17.799201   25161 addons.go:234] Setting addon default-storageclass=true in "ha-866665"
	I0315 06:11:17.799236   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:17.799586   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.799630   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.812556   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0315 06:11:17.813066   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.813620   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.813642   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.814018   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.814195   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.815112   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0315 06:11:17.815506   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.816010   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.816032   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.816143   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:17.816359   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.818651   25161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:11:17.816931   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.820268   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.820379   25161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:11:17.820397   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 06:11:17.820416   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:17.823295   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.823681   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:17.823747   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.823855   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:17.824034   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:17.824173   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:17.824333   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:17.835187   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0315 06:11:17.835566   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.835986   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.836016   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.836398   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.836581   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.838121   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:17.838341   25161 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 06:11:17.838352   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 06:11:17.838364   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:17.840844   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.841296   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:17.841319   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.841427   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:17.841599   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:17.841756   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:17.841873   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:18.004520   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 06:11:18.033029   25161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 06:11:18.056028   25161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:11:18.830505   25161 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
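
The long sed pipeline at 06:11:18.004 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1): it inserts a hosts block immediately ahead of the forward plugin and adds log after errors, then replaces the ConfigMap. A small self-contained Go sketch of the same edit on an in-memory Corefile; the sample Corefile below is illustrative, not the one fetched from this cluster:

// corefile_patch.go - sketch of injecting a host.minikube.internal record into a Corefile.
package main

import (
	"fmt"
	"strings"
)

const sampleCorefile = `.:53 {
    errors
    health
    forward . /etc/resolv.conf
    cache 30
}`

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
	// Insert the hosts block immediately before the forward plugin,
	// mirroring the sed address used in the logged command.
	return strings.Replace(corefile,
		"    forward . /etc/resolv.conf",
		hostsBlock+"    forward . /etc/resolv.conf", 1)
}

func main() {
	fmt.Println(injectHostRecord(sampleCorefile, "192.168.39.1"))
}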
	I0315 06:11:18.830594   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.830618   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.830888   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.830908   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.830948   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.830960   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.830970   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.831187   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.831205   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.831214   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.831316   25161 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0315 06:11:18.831324   25161 round_trippers.go:469] Request Headers:
	I0315 06:11:18.831334   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:11:18.831339   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:11:18.842577   25161 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0315 06:11:18.843182   25161 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0315 06:11:18.843197   25161 round_trippers.go:469] Request Headers:
	I0315 06:11:18.843208   25161 round_trippers.go:473]     Content-Type: application/json
	I0315 06:11:18.843214   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:11:18.843219   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:11:18.846442   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:11:18.846656   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.846674   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.846920   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.846941   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.846962   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.969602   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.969629   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.969936   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.969954   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.969964   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.969972   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.970192   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.970204   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.972182   25161 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0315 06:11:18.973516   25161 addons.go:505] duration metric: took 1.194224795s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0315 06:11:18.973563   25161 start.go:245] waiting for cluster config update ...
	I0315 06:11:18.973582   25161 start.go:254] writing updated cluster config ...
	I0315 06:11:18.975206   25161 out.go:177] 
	I0315 06:11:18.976662   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:18.976735   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:18.978300   25161 out.go:177] * Starting "ha-866665-m02" control-plane node in "ha-866665" cluster
	I0315 06:11:18.979766   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:11:18.979803   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:11:18.979917   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:11:18.979932   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:11:18.980000   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:18.980245   25161 start.go:360] acquireMachinesLock for ha-866665-m02: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:11:18.980293   25161 start.go:364] duration metric: took 27.2µs to acquireMachinesLock for "ha-866665-m02"
	I0315 06:11:18.980316   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:18.980411   25161 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0315 06:11:18.982711   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:11:18.982794   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:18.982826   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:18.997393   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0315 06:11:18.997850   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:18.998314   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:18.998335   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:18.998666   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:18.998819   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:18.998972   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:18.999185   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:11:18.999209   25161 client.go:168] LocalClient.Create starting
	I0315 06:11:18.999242   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:11:18.999284   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:11:18.999300   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:11:18.999342   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:11:18.999360   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:11:18.999371   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:11:18.999385   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:11:18.999393   25161 main.go:141] libmachine: (ha-866665-m02) Calling .PreCreateCheck
	I0315 06:11:18.999563   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:18.999936   25161 main.go:141] libmachine: Creating machine...
	I0315 06:11:18.999949   25161 main.go:141] libmachine: (ha-866665-m02) Calling .Create
	I0315 06:11:19.000111   25161 main.go:141] libmachine: (ha-866665-m02) Creating KVM machine...
	I0315 06:11:19.001375   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found existing default KVM network
	I0315 06:11:19.001562   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found existing private KVM network mk-ha-866665
	I0315 06:11:19.001732   25161 main.go:141] libmachine: (ha-866665-m02) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 ...
	I0315 06:11:19.001756   25161 main.go:141] libmachine: (ha-866665-m02) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:11:19.001804   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.001716   25510 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:11:19.001913   25161 main.go:141] libmachine: (ha-866665-m02) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:11:19.212214   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.212055   25510 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa...
	I0315 06:11:19.452618   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.452478   25510 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/ha-866665-m02.rawdisk...
	I0315 06:11:19.452640   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Writing magic tar header
	I0315 06:11:19.452650   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Writing SSH key tar header
	I0315 06:11:19.452658   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.452622   25510 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 ...
	I0315 06:11:19.452745   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02
	I0315 06:11:19.452762   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:11:19.452807   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 (perms=drwx------)
	I0315 06:11:19.452837   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:11:19.452848   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:11:19.452866   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:11:19.452879   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:11:19.452889   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:11:19.452900   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home
	I0315 06:11:19.452915   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Skipping /home - not owner
	I0315 06:11:19.452944   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:11:19.452981   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:11:19.452997   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:11:19.453009   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:11:19.453019   25161 main.go:141] libmachine: (ha-866665-m02) Creating domain...
	I0315 06:11:19.454021   25161 main.go:141] libmachine: (ha-866665-m02) define libvirt domain using xml: 
	I0315 06:11:19.454038   25161 main.go:141] libmachine: (ha-866665-m02) <domain type='kvm'>
	I0315 06:11:19.454062   25161 main.go:141] libmachine: (ha-866665-m02)   <name>ha-866665-m02</name>
	I0315 06:11:19.454072   25161 main.go:141] libmachine: (ha-866665-m02)   <memory unit='MiB'>2200</memory>
	I0315 06:11:19.454097   25161 main.go:141] libmachine: (ha-866665-m02)   <vcpu>2</vcpu>
	I0315 06:11:19.454114   25161 main.go:141] libmachine: (ha-866665-m02)   <features>
	I0315 06:11:19.454127   25161 main.go:141] libmachine: (ha-866665-m02)     <acpi/>
	I0315 06:11:19.454138   25161 main.go:141] libmachine: (ha-866665-m02)     <apic/>
	I0315 06:11:19.454150   25161 main.go:141] libmachine: (ha-866665-m02)     <pae/>
	I0315 06:11:19.454159   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454178   25161 main.go:141] libmachine: (ha-866665-m02)   </features>
	I0315 06:11:19.454190   25161 main.go:141] libmachine: (ha-866665-m02)   <cpu mode='host-passthrough'>
	I0315 06:11:19.454202   25161 main.go:141] libmachine: (ha-866665-m02)   
	I0315 06:11:19.454217   25161 main.go:141] libmachine: (ha-866665-m02)   </cpu>
	I0315 06:11:19.454230   25161 main.go:141] libmachine: (ha-866665-m02)   <os>
	I0315 06:11:19.454241   25161 main.go:141] libmachine: (ha-866665-m02)     <type>hvm</type>
	I0315 06:11:19.454252   25161 main.go:141] libmachine: (ha-866665-m02)     <boot dev='cdrom'/>
	I0315 06:11:19.454263   25161 main.go:141] libmachine: (ha-866665-m02)     <boot dev='hd'/>
	I0315 06:11:19.454274   25161 main.go:141] libmachine: (ha-866665-m02)     <bootmenu enable='no'/>
	I0315 06:11:19.454290   25161 main.go:141] libmachine: (ha-866665-m02)   </os>
	I0315 06:11:19.454302   25161 main.go:141] libmachine: (ha-866665-m02)   <devices>
	I0315 06:11:19.454314   25161 main.go:141] libmachine: (ha-866665-m02)     <disk type='file' device='cdrom'>
	I0315 06:11:19.454332   25161 main.go:141] libmachine: (ha-866665-m02)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/boot2docker.iso'/>
	I0315 06:11:19.454343   25161 main.go:141] libmachine: (ha-866665-m02)       <target dev='hdc' bus='scsi'/>
	I0315 06:11:19.454372   25161 main.go:141] libmachine: (ha-866665-m02)       <readonly/>
	I0315 06:11:19.454389   25161 main.go:141] libmachine: (ha-866665-m02)     </disk>
	I0315 06:11:19.454397   25161 main.go:141] libmachine: (ha-866665-m02)     <disk type='file' device='disk'>
	I0315 06:11:19.454409   25161 main.go:141] libmachine: (ha-866665-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:11:19.454444   25161 main.go:141] libmachine: (ha-866665-m02)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/ha-866665-m02.rawdisk'/>
	I0315 06:11:19.454468   25161 main.go:141] libmachine: (ha-866665-m02)       <target dev='hda' bus='virtio'/>
	I0315 06:11:19.454484   25161 main.go:141] libmachine: (ha-866665-m02)     </disk>
	I0315 06:11:19.454501   25161 main.go:141] libmachine: (ha-866665-m02)     <interface type='network'>
	I0315 06:11:19.454515   25161 main.go:141] libmachine: (ha-866665-m02)       <source network='mk-ha-866665'/>
	I0315 06:11:19.454526   25161 main.go:141] libmachine: (ha-866665-m02)       <model type='virtio'/>
	I0315 06:11:19.454539   25161 main.go:141] libmachine: (ha-866665-m02)     </interface>
	I0315 06:11:19.454550   25161 main.go:141] libmachine: (ha-866665-m02)     <interface type='network'>
	I0315 06:11:19.454558   25161 main.go:141] libmachine: (ha-866665-m02)       <source network='default'/>
	I0315 06:11:19.454570   25161 main.go:141] libmachine: (ha-866665-m02)       <model type='virtio'/>
	I0315 06:11:19.454579   25161 main.go:141] libmachine: (ha-866665-m02)     </interface>
	I0315 06:11:19.454590   25161 main.go:141] libmachine: (ha-866665-m02)     <serial type='pty'>
	I0315 06:11:19.454608   25161 main.go:141] libmachine: (ha-866665-m02)       <target port='0'/>
	I0315 06:11:19.454623   25161 main.go:141] libmachine: (ha-866665-m02)     </serial>
	I0315 06:11:19.454637   25161 main.go:141] libmachine: (ha-866665-m02)     <console type='pty'>
	I0315 06:11:19.454656   25161 main.go:141] libmachine: (ha-866665-m02)       <target type='serial' port='0'/>
	I0315 06:11:19.454697   25161 main.go:141] libmachine: (ha-866665-m02)     </console>
	I0315 06:11:19.454721   25161 main.go:141] libmachine: (ha-866665-m02)     <rng model='virtio'>
	I0315 06:11:19.454741   25161 main.go:141] libmachine: (ha-866665-m02)       <backend model='random'>/dev/random</backend>
	I0315 06:11:19.454753   25161 main.go:141] libmachine: (ha-866665-m02)     </rng>
	I0315 06:11:19.454763   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454773   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454782   25161 main.go:141] libmachine: (ha-866665-m02)   </devices>
	I0315 06:11:19.454803   25161 main.go:141] libmachine: (ha-866665-m02) </domain>
	I0315 06:11:19.454817   25161 main.go:141] libmachine: (ha-866665-m02) 
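
The XML echoed line-by-line above is a complete libvirt domain definition for ha-866665-m02: 2 vCPUs and 2200 MiB of RAM, a raw virtio disk plus the boot2docker ISO on SCSI, two virtio NICs (one on mk-ha-866665, one on the default network), and a virtio RNG. The kvm2 driver defines and boots this domain through libvirt; a hedged, roughly equivalent sketch using the virsh CLI via os/exec (the XML file path is a placeholder, and this is not the driver's actual code path):

// define_domain.go - sketch: define and start a domain from an XML file with virsh.
package main

import (
	"fmt"
	"os/exec"
)

func defineAndStart(xmlPath, name string) error {
	// `virsh define` registers the persistent domain from its XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("define failed: %v: %s", err, out)
	}
	// `virsh start` boots it; the driver then waits for a DHCP lease (see below).
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/ha-866665-m02.xml", "ha-866665-m02"); err != nil {
		fmt.Println(err)
	}
}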
	I0315 06:11:19.461775   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:2c:8a:b0 in network default
	I0315 06:11:19.462320   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring networks are active...
	I0315 06:11:19.462341   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:19.463146   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring network default is active
	I0315 06:11:19.463477   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring network mk-ha-866665 is active
	I0315 06:11:19.463904   25161 main.go:141] libmachine: (ha-866665-m02) Getting domain xml...
	I0315 06:11:19.464636   25161 main.go:141] libmachine: (ha-866665-m02) Creating domain...
	I0315 06:11:20.671961   25161 main.go:141] libmachine: (ha-866665-m02) Waiting to get IP...
	I0315 06:11:20.672880   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:20.673319   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:20.673382   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:20.673313   25510 retry.go:31] will retry after 238.477447ms: waiting for machine to come up
	I0315 06:11:20.913926   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:20.914405   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:20.914428   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:20.914373   25510 retry.go:31] will retry after 314.77947ms: waiting for machine to come up
	I0315 06:11:21.230707   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:21.231215   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:21.231255   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:21.231181   25510 retry.go:31] will retry after 448.854491ms: waiting for machine to come up
	I0315 06:11:21.681861   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:21.682358   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:21.682388   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:21.682292   25510 retry.go:31] will retry after 371.773993ms: waiting for machine to come up
	I0315 06:11:22.055701   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:22.056084   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:22.056115   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:22.056007   25510 retry.go:31] will retry after 740.031821ms: waiting for machine to come up
	I0315 06:11:22.797893   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:22.798351   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:22.798402   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:22.798305   25510 retry.go:31] will retry after 599.3896ms: waiting for machine to come up
	I0315 06:11:23.399029   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:23.399566   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:23.399590   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:23.399509   25510 retry.go:31] will retry after 1.146745032s: waiting for machine to come up
	I0315 06:11:24.548189   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:24.548620   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:24.548644   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:24.548518   25510 retry.go:31] will retry after 1.283100132s: waiting for machine to come up
	I0315 06:11:25.833853   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:25.834293   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:25.834322   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:25.834252   25510 retry.go:31] will retry after 1.779659298s: waiting for machine to come up
	I0315 06:11:27.616200   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:27.616664   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:27.616690   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:27.616626   25510 retry.go:31] will retry after 1.75877657s: waiting for machine to come up
	I0315 06:11:29.376614   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:29.377098   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:29.377123   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:29.377056   25510 retry.go:31] will retry after 2.667490999s: waiting for machine to come up
	I0315 06:11:32.046591   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:32.046965   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:32.046991   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:32.046928   25510 retry.go:31] will retry after 3.546712049s: waiting for machine to come up
	I0315 06:11:35.595780   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:35.596299   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:35.596323   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:35.596248   25510 retry.go:31] will retry after 3.690333447s: waiting for machine to come up
	I0315 06:11:39.287776   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:39.288235   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:39.288263   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:39.288190   25510 retry.go:31] will retry after 5.596711816s: waiting for machine to come up
	I0315 06:11:44.886163   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.886584   25161 main.go:141] libmachine: (ha-866665-m02) Found IP for machine: 192.168.39.27
	I0315 06:11:44.886607   25161 main.go:141] libmachine: (ha-866665-m02) Reserving static IP address...
	I0315 06:11:44.886619   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.887066   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find host DHCP lease matching {name: "ha-866665-m02", mac: "52:54:00:fa:e0:d5", ip: "192.168.39.27"} in network mk-ha-866665
	I0315 06:11:44.960481   25161 main.go:141] libmachine: (ha-866665-m02) Reserved static IP address: 192.168.39.27
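
The "Waiting to get IP" phase above is a retry loop: each pass looks for a DHCP lease matching the domain's MAC address in mk-ha-866665 and, if none exists yet, sleeps for a randomized, growing interval (238ms up to ~5.6s in this run) before trying again. A minimal Go sketch of that backoff pattern; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, not a real API:

// wait_for_ip.go - sketch of the retry-with-growing-backoff wait seen above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt network
// for a DHCP lease matching the domain's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // always empty in this stub
}

func waitForIP(mac string, maxTries int) (string, error) {
	for i := 0; i < maxTries; i++ {
		if ip, err := lookupLeaseIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		// Randomized delay that grows with the attempt number,
		// similar to the retry.go intervals logged above.
		delay := time.Duration(200+rand.Intn(300*(i+1))) * time.Millisecond
		fmt.Printf("no IP yet, retrying in %s\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d tries", mac, maxTries)
}

func main() {
	ip, err := waitForIP("52:54:00:fa:e0:d5", 5)
	fmt.Println(ip, err)
}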
	I0315 06:11:44.960506   25161 main.go:141] libmachine: (ha-866665-m02) Waiting for SSH to be available...
	I0315 06:11:44.960552   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Getting to WaitForSSH function...
	I0315 06:11:44.962954   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.963264   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:44.963296   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.963451   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using SSH client type: external
	I0315 06:11:44.963479   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa (-rw-------)
	I0315 06:11:44.963517   25161 main.go:141] libmachine: (ha-866665-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:11:44.963538   25161 main.go:141] libmachine: (ha-866665-m02) DBG | About to run SSH command:
	I0315 06:11:44.963555   25161 main.go:141] libmachine: (ha-866665-m02) DBG | exit 0
	I0315 06:11:45.093093   25161 main.go:141] libmachine: (ha-866665-m02) DBG | SSH cmd err, output: <nil>: 
	I0315 06:11:45.093395   25161 main.go:141] libmachine: (ha-866665-m02) KVM machine creation complete!
	I0315 06:11:45.093717   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:45.094288   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:45.094511   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:45.094683   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:11:45.094697   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:11:45.096173   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:11:45.096188   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:11:45.096194   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:11:45.096199   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.098422   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.098859   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.098892   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.099003   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.099170   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.099336   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.099498   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.099660   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.099916   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.099932   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:11:45.208077   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:11:45.208104   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:11:45.208115   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.211003   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.211441   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.211472   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.211761   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.211963   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.212138   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.212291   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.212491   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.212649   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.212672   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:11:45.325589   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:11:45.325673   25161 main.go:141] libmachine: found compatible host: buildroot
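
WaitForSSH simply runs "exit 0" over SSH until it succeeds, then identifies the guest by running "cat /etc/os-release" (Buildroot 2023.02.9 here). The log shows both an external /usr/bin/ssh invocation and the native Go client; below is a hedged sketch of the native-style probe using golang.org/x/crypto/ssh. The address, user, and key path echo this run, error handling is simplified, and host-key checking is disabled to match the logged ssh flags:

// ssh_probe.go - sketch of the "exit 0" reachability probe and os-release check.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func probe(addr, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, as in the logged flags
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	// Reachability check: a session that just runs `exit 0`.
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	if err := sess.Run("exit 0"); err != nil {
		sess.Close()
		return "", err
	}
	sess.Close()

	// Provisioner detection, as in the log: read /etc/os-release.
	sess2, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess2.Close()
	out, err := sess2.CombinedOutput("cat /etc/os-release")
	return string(out), err
}

func main() {
	out, err := probe("192.168.39.27:22", "docker",
		"/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa")
	fmt.Println(out, err)
}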
	I0315 06:11:45.325686   25161 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:11:45.325701   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.325978   25161 buildroot.go:166] provisioning hostname "ha-866665-m02"
	I0315 06:11:45.326014   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.326192   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.328903   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.329329   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.329357   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.329487   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.329661   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.329814   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.329939   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.330097   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.330278   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.330294   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665-m02 && echo "ha-866665-m02" | sudo tee /etc/hostname
	I0315 06:11:45.451730   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665-m02
	
	I0315 06:11:45.451779   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.454743   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.455063   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.455088   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.455261   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.455462   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.455626   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.455751   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.455918   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.456074   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.456090   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:11:45.574451   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
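The two SSH commands above (set the hostname, then patch /etc/hosts) make up the whole hostname-provisioning step. A standalone sketch of the same sequence, with NODE as a placeholder for the ha-866665-m02 used in this run:

    #!/usr/bin/env bash
    # Re-creation of the hostname provisioning step logged above; NODE is illustrative.
    set -euo pipefail
    NODE="ha-866665-m02"

    # Transient + persistent hostname.
    sudo hostname "${NODE}" && echo "${NODE}" | sudo tee /etc/hostname

    # Ensure /etc/hosts resolves the new name via 127.0.1.1.
    if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
      fi
    fi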
	I0315 06:11:45.574505   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:11:45.574529   25161 buildroot.go:174] setting up certificates
	I0315 06:11:45.574544   25161 provision.go:84] configureAuth start
	I0315 06:11:45.574564   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.574872   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:45.577470   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.577872   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.577888   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.578042   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.580303   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.580661   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.580694   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.580845   25161 provision.go:143] copyHostCerts
	I0315 06:11:45.580875   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:11:45.580917   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:11:45.580928   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:11:45.581068   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:11:45.581189   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:11:45.581214   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:11:45.581221   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:11:45.581259   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:11:45.581357   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:11:45.581381   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:11:45.581386   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:11:45.581418   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:11:45.581497   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665-m02 san=[127.0.0.1 192.168.39.27 ha-866665-m02 localhost minikube]
	I0315 06:11:45.989846   25161 provision.go:177] copyRemoteCerts
	I0315 06:11:45.989902   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:11:45.989924   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.992909   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.993324   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.993356   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.993555   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.993777   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.993938   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.994060   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.081396   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:11:46.081473   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:11:46.109864   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:11:46.109938   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 06:11:46.137707   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:11:46.137790   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:11:46.164843   25161 provision.go:87] duration metric: took 590.282213ms to configureAuth
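configureAuth generates a server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the guest IP, the node hostname, localhost and minikube, and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; a rough openssl equivalent (file names illustrative, not what minikube runs) would be:

    # Rough openssl equivalent of the server-cert generation above (illustrative only).
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-866665-m02"

    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.27,DNS:ha-866665-m02,DNS:localhost,DNS:minikube') \
      -out server.pem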
	I0315 06:11:46.164875   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:11:46.165037   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:46.165114   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.168318   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.168773   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.168796   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.169008   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.169194   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.169349   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.169468   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.169652   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:46.169818   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:46.169834   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:11:46.453910   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:11:46.453940   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:11:46.453950   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetURL
	I0315 06:11:46.455357   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using libvirt version 6000000
	I0315 06:11:46.458465   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.458944   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.458970   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.459139   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:11:46.459162   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:11:46.459170   25161 client.go:171] duration metric: took 27.459953429s to LocalClient.Create
	I0315 06:11:46.459197   25161 start.go:167] duration metric: took 27.460010575s to libmachine.API.Create "ha-866665"
	I0315 06:11:46.459209   25161 start.go:293] postStartSetup for "ha-866665-m02" (driver="kvm2")
	I0315 06:11:46.459224   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:11:46.459279   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.459554   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:11:46.459580   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.461984   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.462358   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.462377   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.462538   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.462718   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.462841   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.462983   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.549717   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:11:46.554606   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:11:46.554634   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:11:46.554712   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:11:46.554797   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:11:46.554808   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:11:46.554915   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:11:46.565688   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:11:46.592998   25161 start.go:296] duration metric: took 133.773575ms for postStartSetup
	I0315 06:11:46.593055   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:46.593615   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:46.596277   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.596611   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.596638   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.596890   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:46.597078   25161 start.go:128] duration metric: took 27.61665701s to createHost
	I0315 06:11:46.597110   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.599568   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.599955   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.599992   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.600096   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.600293   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.600482   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.600663   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.600821   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:46.601009   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:46.601023   25161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:11:46.709895   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483106.697596744
	
	I0315 06:11:46.709924   25161 fix.go:216] guest clock: 1710483106.697596744
	I0315 06:11:46.709934   25161 fix.go:229] Guest: 2024-03-15 06:11:46.697596744 +0000 UTC Remote: 2024-03-15 06:11:46.597092984 +0000 UTC m=+84.595361407 (delta=100.50376ms)
	I0315 06:11:46.709953   25161 fix.go:200] guest clock delta is within tolerance: 100.50376ms
	I0315 06:11:46.709960   25161 start.go:83] releasing machines lock for "ha-866665-m02", held for 27.7296545s
	I0315 06:11:46.709986   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.710286   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:46.713347   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.713749   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.713778   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.715805   25161 out.go:177] * Found network options:
	I0315 06:11:46.717132   25161 out.go:177]   - NO_PROXY=192.168.39.78
	W0315 06:11:46.718565   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:11:46.718627   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719172   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719355   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719441   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:11:46.719478   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	W0315 06:11:46.719563   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:11:46.719627   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:11:46.719648   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.722207   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722315   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722595   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.722671   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722705   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.722726   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722741   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.722924   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.723022   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.723085   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.723217   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.723221   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.723342   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.723456   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.962454   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:11:46.969974   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:11:46.970051   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:11:46.986944   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:11:46.986965   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:11:46.987024   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:11:47.005987   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:11:47.023015   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:11:47.023085   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:11:47.039088   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:11:47.055005   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:11:47.175129   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:11:47.329335   25161 docker.go:233] disabling docker service ...
	I0315 06:11:47.329416   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:11:47.345111   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:11:47.358569   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:11:47.495710   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:11:47.619051   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:11:47.633625   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:11:47.653527   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:11:47.653600   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.664914   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:11:47.664985   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.675987   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.688607   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.699887   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:11:47.712058   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:11:47.722345   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:11:47.722393   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:11:47.735456   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:11:47.746113   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:11:47.859069   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
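Everything from the crictl.yaml write down to the crio restart amounts to: point crictl at the CRI-O socket, pin the pause image and cgroup driver in the conf.d drop-in, and turn on the bridge/forwarding kernel knobs. Consolidated, the same steps (taken directly from the commands in the log) look like this:

    # Consolidated form of the CRI-O configuration steps logged above.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # br_netfilter was not loaded (the sysctl probe above failed), so load it and enable forwarding.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

    sudo systemctl daemon-reload && sudo systemctl restart crio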
	I0315 06:11:48.009681   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:11:48.009775   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:11:48.015225   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:11:48.015290   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:11:48.019748   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:11:48.061885   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:11:48.061977   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:11:48.096436   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:11:48.127478   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:11:48.128915   25161 out.go:177]   - env NO_PROXY=192.168.39.78
	I0315 06:11:48.130076   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:48.132961   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:48.133395   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:48.133425   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:48.133753   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:11:48.138360   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:11:48.152751   25161 mustload.go:65] Loading cluster: ha-866665
	I0315 06:11:48.152991   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:48.153287   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:48.153315   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:48.168153   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0315 06:11:48.168705   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:48.169170   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:48.169191   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:48.169512   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:48.169723   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:48.171126   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:48.171526   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:48.171550   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:48.185533   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0315 06:11:48.185946   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:48.186369   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:48.186389   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:48.186692   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:48.186873   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:48.187131   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.27
	I0315 06:11:48.187151   25161 certs.go:194] generating shared ca certs ...
	I0315 06:11:48.187169   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.187316   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:11:48.187375   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:11:48.187390   25161 certs.go:256] generating profile certs ...
	I0315 06:11:48.187530   25161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:11:48.187561   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f
	I0315 06:11:48.187573   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:11:48.439901   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f ...
	I0315 06:11:48.439953   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f: {Name:mk4b26567136aa6ff7ab4bb617e00cc8478d0fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.440346   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f ...
	I0315 06:11:48.440362   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f: {Name:mk33e05d1d83753c9e7ce4362d742df9a7045182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.440489   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:11:48.440665   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:11:48.440836   25161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:11:48.440854   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:11:48.440872   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:11:48.440892   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:11:48.440909   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:11:48.440925   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:11:48.440942   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:11:48.440959   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:11:48.440977   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:11:48.441046   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:11:48.441092   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:11:48.441101   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:11:48.441131   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:11:48.441160   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:11:48.441192   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:11:48.441246   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:11:48.441287   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:11:48.441308   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:48.441326   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:11:48.441361   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:48.444608   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:48.445108   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:48.445136   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:48.445313   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:48.445527   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:48.445667   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:48.445814   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:48.516883   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 06:11:48.522537   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 06:11:48.534653   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 06:11:48.539108   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 06:11:48.550662   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 06:11:48.556214   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 06:11:48.567264   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 06:11:48.571559   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0315 06:11:48.582101   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 06:11:48.586153   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 06:11:48.596016   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 06:11:48.599838   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0315 06:11:48.609654   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:11:48.636199   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:11:48.661419   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:11:48.687348   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:11:48.715380   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 06:11:48.740315   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:11:48.765710   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:11:48.793180   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:11:48.818824   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:11:48.843675   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:11:48.867791   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:11:48.892538   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 06:11:48.910145   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 06:11:48.927330   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 06:11:48.944720   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0315 06:11:48.962302   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 06:11:48.981248   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0315 06:11:49.000223   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 06:11:49.020279   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:11:49.026448   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:11:49.039683   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.044357   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.044408   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.050433   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:11:49.064150   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:11:49.077966   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.083512   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.083575   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.089653   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:11:49.102055   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:11:49.114119   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.118843   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.118901   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.124809   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:11:49.136983   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:11:49.141295   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:11:49.141350   25161 kubeadm.go:928] updating node {m02 192.168.39.27 8443 v1.28.4 crio true true} ...
	I0315 06:11:49.141446   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
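The kubelet unit fragment printed above is roughly what the later 312-byte transfer to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf installs. Reconstructed from those flags (not a byte-for-byte copy of what minikube writes):

    # Reconstructed kubelet drop-in based on the unit fragment above (illustrative).
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27

    [Install]
    EOF
    sudo systemctl daemon-reload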
	I0315 06:11:49.141470   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:11:49.141497   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:11:49.160734   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:11:49.160794   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
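The manifest above is what the later 1346-byte transfer writes to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs it as a static pod that holds the 192.168.39.254 control-plane VIP. Once the node is up, a quick check from the host (assumes the usual <pod>-<node> mirror-pod naming and a kubectl context named after this profile):

    # Illustrative check that the kube-vip static pod is running on the new node.
    kubectl --context ha-866665 -n kube-system get pod kube-vip-ha-866665-m02
    kubectl --context ha-866665 -n kube-system logs kube-vip-ha-866665-m02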
	I0315 06:11:49.160844   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:11:49.171655   25161 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 06:11:49.171703   25161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 06:11:49.182048   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 06:11:49.182079   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:11:49.182157   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:11:49.182203   25161 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0315 06:11:49.182159   25161 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0315 06:11:49.187331   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 06:11:49.187360   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 06:11:50.311616   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:11:50.329163   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:11:50.329314   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:11:50.334183   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 06:11:50.334229   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0315 06:11:56.954032   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:11:56.954128   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:11:56.959313   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 06:11:56.959348   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
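kubectl, kubelet and kubeadm are pulled from dl.k8s.io with the published .sha256 as the checksum pin, cached under .minikube/cache, and copied into /var/lib/minikube/binaries/v1.28.4 on the guest. A manual equivalent of the download-and-verify part (assuming the usual single-hash format of the .sha256 files; paths illustrative):

    # Manual equivalent of the binary fetch logged above.
    VER=v1.28.4
    for b in kubectl kubeadm kubelet; do
      curl -fLo "${b}" "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${b}"
      echo "$(curl -fsSL "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${b}.sha256")  ${b}" | sha256sum -c -
    done
    sudo mkdir -p "/var/lib/minikube/binaries/${VER}"
    sudo install -m 0755 kubectl kubeadm kubelet "/var/lib/minikube/binaries/${VER}/"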
	I0315 06:11:57.207604   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 06:11:57.218204   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 06:11:57.235730   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:11:57.252913   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:11:57.270062   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:11:57.274487   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:11:57.286677   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:11:57.426308   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:11:57.444974   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:57.445449   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:57.445488   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:57.460080   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0315 06:11:57.460532   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:57.460957   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:57.460974   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:57.461376   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:57.461625   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:57.461800   25161 start.go:316] joinCluster: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:11:57.461917   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 06:11:57.461935   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:57.464992   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:57.465490   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:57.465517   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:57.465709   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:57.465895   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:57.466114   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:57.466266   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:57.635488   25161 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:57.635545   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d82o6a.5k3xjxfj0ny7by1z --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I0315 06:12:37.437149   25161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d82o6a.5k3xjxfj0ny7by1z --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (39.801581492s)
	I0315 06:12:37.437183   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 06:12:37.893523   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m02 minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false
	I0315 06:12:38.000064   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-866665-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 06:12:38.117574   25161 start.go:318] duration metric: took 40.655767484s to joinCluster
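
For context on the two commands above: minikube first asks the existing control plane to mint a reusable join command, then replays it on the new node with the extra control-plane flags. A minimal sketch of that shape in Go (illustrative only, not minikube's own code; it shells out locally with os/exec instead of over SSH, and only prints the final command rather than running it):

// Sketch: the two-step control-plane join flow visible in the log above.
// kubeadm flags are the ones shown in the log; everything else is illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on an existing control-plane node, print a reusable join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: on the new node, run that command plus the control-plane flags.
	args := append(strings.Fields(joinCmd)[1:], // drop the leading "kubeadm"
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.27",
		"--apiserver-bind-port=8443",
		"--ignore-preflight-errors=all",
	)
	fmt.Println("would run: kubeadm", strings.Join(args, " "))
}
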
	I0315 06:12:38.117651   25161 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:12:38.119216   25161 out.go:177] * Verifying Kubernetes components...
	I0315 06:12:38.117888   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:12:38.120439   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:12:38.282643   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:12:38.299969   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:12:38.300252   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 06:12:38.300331   25161 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.78:8443
	I0315 06:12:38.300534   25161 node_ready.go:35] waiting up to 6m0s for node "ha-866665-m02" to be "Ready" ...
	I0315 06:12:38.300616   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:38.300624   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:38.300631   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:38.300635   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:38.310858   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:12:38.801454   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:38.801482   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:38.801493   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:38.801498   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:38.805250   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:39.301423   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:39.301446   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:39.301459   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:39.301465   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:39.305185   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:39.801156   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:39.801178   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:39.801185   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:39.801190   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:39.805670   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:40.301692   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:40.301714   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:40.301726   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:40.301732   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:40.305762   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:40.306565   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:40.801762   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:40.801785   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:40.801796   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:40.801800   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:40.807075   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:41.301728   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:41.301749   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:41.301757   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:41.301761   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:41.305174   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:41.801238   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:41.801267   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:41.801278   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:41.801284   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:41.804969   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:42.300804   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:42.300824   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:42.300831   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:42.300836   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:42.305441   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:42.306636   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:42.801494   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:42.801526   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:42.801533   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:42.801537   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:42.805306   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:43.301268   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:43.301289   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:43.301297   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:43.301301   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:43.305499   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:43.801380   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:43.801400   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:43.801408   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:43.801419   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:43.806326   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.301704   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:44.301727   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:44.301735   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:44.301741   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:44.305934   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.801016   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:44.801040   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:44.801047   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:44.801052   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:44.805913   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.806538   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:45.301701   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:45.301777   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:45.301793   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:45.301806   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:45.307725   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:45.801737   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:45.801759   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:45.801770   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:45.801776   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:45.807657   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:46.301709   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:46.301733   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:46.301742   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:46.301748   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:46.308832   25161 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0315 06:12:46.800901   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:46.800930   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:46.800953   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:46.800962   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:46.804590   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.301026   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:47.301061   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.301074   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.301084   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.304330   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.305259   25161 node_ready.go:49] node "ha-866665-m02" has status "Ready":"True"
	I0315 06:12:47.305282   25161 node_ready.go:38] duration metric: took 9.004730208s for node "ha-866665-m02" to be "Ready" ...
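
The repeated GETs above are minikube polling /api/v1/nodes/ha-866665-m02 roughly every 500ms until the node's Ready condition turns True. A rough client-go equivalent (a sketch, not minikube's node_ready implementation; the kubeconfig path is a placeholder):

// Sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-866665-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
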
	I0315 06:12:47.305294   25161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:12:47.305371   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:47.305385   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.305396   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.305403   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.311117   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:47.317728   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.317807   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mgthb
	I0315 06:12:47.317820   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.317829   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.317836   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.320636   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.321240   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.321255   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.321262   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.321265   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.323898   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.324532   25161 pod_ready.go:92] pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.324548   25161 pod_ready.go:81] duration metric: took 6.79959ms for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.324556   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.324600   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r57px
	I0315 06:12:47.324607   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.324614   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.324619   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.327370   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.328092   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.328108   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.328117   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.328122   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.330755   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.331524   25161 pod_ready.go:92] pod "coredns-5dd5756b68-r57px" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.331539   25161 pod_ready.go:81] duration metric: took 6.977272ms for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.331546   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.331600   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665
	I0315 06:12:47.331612   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.331620   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.331625   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.334533   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.335071   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.335082   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.335087   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.335091   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.337345   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.337873   25161 pod_ready.go:92] pod "etcd-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.337885   25161 pod_ready.go:81] duration metric: took 6.334392ms for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.337892   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.337928   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m02
	I0315 06:12:47.337935   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.337942   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.337946   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.340522   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.341110   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:47.341123   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.341131   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.341136   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.344723   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.345392   25161 pod_ready.go:92] pod "etcd-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.345404   25161 pod_ready.go:81] duration metric: took 7.506484ms for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.345416   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.502029   25161 request.go:629] Waited for 156.551918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:12:47.502079   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:12:47.502086   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.502096   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.502105   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.505512   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.701509   25161 request.go:629] Waited for 195.358809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.701574   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.701586   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.701597   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.701605   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.705391   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.705935   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.705952   25161 pod_ready.go:81] duration metric: took 360.530863ms for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.705962   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.901162   25161 request.go:629] Waited for 195.120234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:12:47.901229   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:12:47.901233   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.901240   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.901243   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.904715   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.101650   25161 request.go:629] Waited for 196.22571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.101726   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.101733   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.101744   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.101759   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.105495   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.105945   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.105966   25161 pod_ready.go:81] duration metric: took 399.998423ms for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.105975   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.302123   25161 request.go:629] Waited for 196.080349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:12:48.302232   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:12:48.302243   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.302250   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.302254   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.306075   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.501855   25161 request.go:629] Waited for 195.154281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:48.501923   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:48.501928   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.501936   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.501942   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.506180   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:48.506886   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.506904   25161 pod_ready.go:81] duration metric: took 400.923624ms for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.506914   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.702021   25161 request.go:629] Waited for 195.031498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:12:48.702078   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:12:48.702083   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.702091   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.702095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.705692   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.901653   25161 request.go:629] Waited for 195.17366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.901712   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.901718   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.901726   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.901729   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.905124   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.905675   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.905693   25161 pod_ready.go:81] duration metric: took 398.773812ms for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.905702   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.101312   25161 request.go:629] Waited for 195.556427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:12:49.101369   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:12:49.101374   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.101381   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.101384   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.105639   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:49.301905   25161 request.go:629] Waited for 195.292907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:49.301953   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:49.301958   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.301966   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.301970   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.305525   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.306205   25161 pod_ready.go:92] pod "kube-proxy-lqzk8" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:49.306233   25161 pod_ready.go:81] duration metric: took 400.522917ms for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.306245   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.501418   25161 request.go:629] Waited for 195.105502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:12:49.501493   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:12:49.501506   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.501517   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.501527   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.505178   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.701493   25161 request.go:629] Waited for 195.378076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:49.701573   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:49.701581   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.701592   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.701596   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.705281   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.705957   25161 pod_ready.go:92] pod "kube-proxy-sbxgg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:49.705978   25161 pod_ready.go:81] duration metric: took 399.7239ms for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.705991   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.901028   25161 request.go:629] Waited for 194.979548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:12:49.901083   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:12:49.901094   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.901113   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.901124   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.904875   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.101657   25161 request.go:629] Waited for 196.275103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:50.101737   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:50.101745   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.101755   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.101771   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.105770   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.106333   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:50.106352   25161 pod_ready.go:81] duration metric: took 400.352693ms for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.106365   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.301527   25161 request.go:629] Waited for 195.083975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:12:50.301585   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:12:50.301590   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.301597   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.301601   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.305765   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:50.501889   25161 request.go:629] Waited for 195.380466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:50.501943   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:50.501950   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.501957   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.501968   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.508595   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:12:50.510198   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:50.510218   25161 pod_ready.go:81] duration metric: took 403.844299ms for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.510228   25161 pod_ready.go:38] duration metric: took 3.204921641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:12:50.510243   25161 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:12:50.510297   25161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:12:50.525226   25161 api_server.go:72] duration metric: took 12.407537134s to wait for apiserver process to appear ...
	I0315 06:12:50.525257   25161 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:12:50.525278   25161 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0315 06:12:50.531827   25161 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0315 06:12:50.531886   25161 round_trippers.go:463] GET https://192.168.39.78:8443/version
	I0315 06:12:50.531891   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.531899   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.531904   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.533184   25161 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 06:12:50.533279   25161 api_server.go:141] control plane version: v1.28.4
	I0315 06:12:50.533300   25161 api_server.go:131] duration metric: took 8.036289ms to wait for apiserver health ...
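
The same health and version checks can be expressed with a typed client instead of raw requests; a small sketch (kubeconfig path is a placeholder) follows:

// Sketch: apiserver /healthz and /version via client-go, mirroring the log above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz returns the literal body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version reports the control-plane version (v1.28.4 in this run).
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
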
	I0315 06:12:50.533307   25161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:12:50.701639   25161 request.go:629] Waited for 168.269401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:50.701695   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:50.701702   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.701712   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.701721   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.707893   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:12:50.712112   25161 system_pods.go:59] 17 kube-system pods found
	I0315 06:12:50.712143   25161 system_pods.go:61] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:12:50.712149   25161 system_pods.go:61] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:12:50.712154   25161 system_pods.go:61] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:12:50.712159   25161 system_pods.go:61] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:12:50.712163   25161 system_pods.go:61] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:12:50.712168   25161 system_pods.go:61] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:12:50.712173   25161 system_pods.go:61] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:12:50.712178   25161 system_pods.go:61] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:12:50.712183   25161 system_pods.go:61] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:12:50.712189   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:12:50.712197   25161 system_pods.go:61] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:12:50.712203   25161 system_pods.go:61] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:12:50.712212   25161 system_pods.go:61] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:12:50.712217   25161 system_pods.go:61] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:12:50.712225   25161 system_pods.go:61] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:12:50.712229   25161 system_pods.go:61] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:12:50.712233   25161 system_pods.go:61] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:12:50.712241   25161 system_pods.go:74] duration metric: took 178.928299ms to wait for pod list to return data ...
	I0315 06:12:50.712257   25161 default_sa.go:34] waiting for default service account to be created ...
	I0315 06:12:50.901688   25161 request.go:629] Waited for 189.357264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:12:50.901760   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:12:50.901767   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.901774   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.901779   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.905542   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.905747   25161 default_sa.go:45] found service account: "default"
	I0315 06:12:50.905766   25161 default_sa.go:55] duration metric: took 193.501058ms for default service account to be created ...
	I0315 06:12:50.905776   25161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 06:12:51.101142   25161 request.go:629] Waited for 195.290804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:51.101193   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:51.101200   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:51.101209   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:51.101218   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:51.106594   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:51.111129   25161 system_pods.go:86] 17 kube-system pods found
	I0315 06:12:51.111156   25161 system_pods.go:89] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:12:51.111163   25161 system_pods.go:89] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:12:51.111169   25161 system_pods.go:89] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:12:51.111175   25161 system_pods.go:89] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:12:51.111181   25161 system_pods.go:89] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:12:51.111187   25161 system_pods.go:89] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:12:51.111193   25161 system_pods.go:89] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:12:51.111200   25161 system_pods.go:89] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:12:51.111206   25161 system_pods.go:89] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:12:51.111220   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:12:51.111236   25161 system_pods.go:89] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:12:51.111245   25161 system_pods.go:89] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:12:51.111253   25161 system_pods.go:89] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:12:51.111262   25161 system_pods.go:89] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:12:51.111269   25161 system_pods.go:89] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:12:51.111279   25161 system_pods.go:89] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:12:51.111285   25161 system_pods.go:89] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:12:51.111297   25161 system_pods.go:126] duration metric: took 205.514134ms to wait for k8s-apps to be running ...
	I0315 06:12:51.111311   25161 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 06:12:51.111363   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:12:51.128999   25161 system_svc.go:56] duration metric: took 17.683933ms WaitForService to wait for kubelet
	I0315 06:12:51.129024   25161 kubeadm.go:576] duration metric: took 13.01133885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:12:51.129040   25161 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:12:51.301469   25161 request.go:629] Waited for 172.362621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes
	I0315 06:12:51.301556   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes
	I0315 06:12:51.301562   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:51.301570   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:51.301577   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:51.305944   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:51.306624   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:12:51.306647   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:12:51.306657   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:12:51.306661   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:12:51.306666   25161 node_conditions.go:105] duration metric: took 177.621595ms to run NodePressure ...
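
The NodePressure step is essentially a read of each node's reported capacity (CPU and ephemeral storage). A minimal client-go sketch of that read (kubeconfig path is a placeholder) is:

// Sketch: list nodes and print the capacity fields the NodePressure check reports.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}
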
	I0315 06:12:51.306683   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:12:51.306706   25161 start.go:254] writing updated cluster config ...
	I0315 06:12:51.309068   25161 out.go:177] 
	I0315 06:12:51.310799   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:12:51.310895   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:12:51.312662   25161 out.go:177] * Starting "ha-866665-m03" control-plane node in "ha-866665" cluster
	I0315 06:12:51.313873   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:12:51.313891   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:12:51.313994   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:12:51.314007   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:12:51.314110   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:12:51.314268   25161 start.go:360] acquireMachinesLock for ha-866665-m03: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:12:51.314311   25161 start.go:364] duration metric: took 24.232µs to acquireMachinesLock for "ha-866665-m03"
	I0315 06:12:51.314334   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:12:51.314439   25161 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0315 06:12:51.315981   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:12:51.316063   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:12:51.316089   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:12:51.331141   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0315 06:12:51.331538   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:12:51.332014   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:12:51.332036   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:12:51.332346   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:12:51.332539   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:12:51.332703   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:12:51.332943   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:12:51.332970   25161 client.go:168] LocalClient.Create starting
	I0315 06:12:51.333029   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:12:51.333060   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:12:51.333074   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:12:51.333141   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:12:51.333158   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:12:51.333172   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:12:51.333188   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:12:51.333196   25161 main.go:141] libmachine: (ha-866665-m03) Calling .PreCreateCheck
	I0315 06:12:51.333400   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:12:51.333796   25161 main.go:141] libmachine: Creating machine...
	I0315 06:12:51.333811   25161 main.go:141] libmachine: (ha-866665-m03) Calling .Create
	I0315 06:12:51.333947   25161 main.go:141] libmachine: (ha-866665-m03) Creating KVM machine...
	I0315 06:12:51.335286   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found existing default KVM network
	I0315 06:12:51.335475   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found existing private KVM network mk-ha-866665
	I0315 06:12:51.335613   25161 main.go:141] libmachine: (ha-866665-m03) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 ...
	I0315 06:12:51.335663   25161 main.go:141] libmachine: (ha-866665-m03) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:12:51.335739   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.335629   25860 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:12:51.335847   25161 main.go:141] libmachine: (ha-866665-m03) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:12:51.562090   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.561964   25860 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa...
	I0315 06:12:51.780631   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.780514   25860 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/ha-866665-m03.rawdisk...
	I0315 06:12:51.780658   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Writing magic tar header
	I0315 06:12:51.780668   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Writing SSH key tar header
	I0315 06:12:51.780676   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.780648   25860 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 ...
	I0315 06:12:51.780777   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03
	I0315 06:12:51.780796   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:12:51.780804   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 (perms=drwx------)
	I0315 06:12:51.780814   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:12:51.780828   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:12:51.780844   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:12:51.780857   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:12:51.780874   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:12:51.780892   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:12:51.780904   25161 main.go:141] libmachine: (ha-866665-m03) Creating domain...
	I0315 06:12:51.780922   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:12:51.780939   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:12:51.780953   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:12:51.780961   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home
	I0315 06:12:51.780972   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Skipping /home - not owner
	I0315 06:12:51.781798   25161 main.go:141] libmachine: (ha-866665-m03) define libvirt domain using xml: 
	I0315 06:12:51.781829   25161 main.go:141] libmachine: (ha-866665-m03) <domain type='kvm'>
	I0315 06:12:51.781840   25161 main.go:141] libmachine: (ha-866665-m03)   <name>ha-866665-m03</name>
	I0315 06:12:51.781850   25161 main.go:141] libmachine: (ha-866665-m03)   <memory unit='MiB'>2200</memory>
	I0315 06:12:51.781861   25161 main.go:141] libmachine: (ha-866665-m03)   <vcpu>2</vcpu>
	I0315 06:12:51.781877   25161 main.go:141] libmachine: (ha-866665-m03)   <features>
	I0315 06:12:51.781890   25161 main.go:141] libmachine: (ha-866665-m03)     <acpi/>
	I0315 06:12:51.781901   25161 main.go:141] libmachine: (ha-866665-m03)     <apic/>
	I0315 06:12:51.781911   25161 main.go:141] libmachine: (ha-866665-m03)     <pae/>
	I0315 06:12:51.781921   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.781931   25161 main.go:141] libmachine: (ha-866665-m03)   </features>
	I0315 06:12:51.781943   25161 main.go:141] libmachine: (ha-866665-m03)   <cpu mode='host-passthrough'>
	I0315 06:12:51.781955   25161 main.go:141] libmachine: (ha-866665-m03)   
	I0315 06:12:51.781971   25161 main.go:141] libmachine: (ha-866665-m03)   </cpu>
	I0315 06:12:51.782001   25161 main.go:141] libmachine: (ha-866665-m03)   <os>
	I0315 06:12:51.782025   25161 main.go:141] libmachine: (ha-866665-m03)     <type>hvm</type>
	I0315 06:12:51.782047   25161 main.go:141] libmachine: (ha-866665-m03)     <boot dev='cdrom'/>
	I0315 06:12:51.782058   25161 main.go:141] libmachine: (ha-866665-m03)     <boot dev='hd'/>
	I0315 06:12:51.782067   25161 main.go:141] libmachine: (ha-866665-m03)     <bootmenu enable='no'/>
	I0315 06:12:51.782078   25161 main.go:141] libmachine: (ha-866665-m03)   </os>
	I0315 06:12:51.782099   25161 main.go:141] libmachine: (ha-866665-m03)   <devices>
	I0315 06:12:51.782118   25161 main.go:141] libmachine: (ha-866665-m03)     <disk type='file' device='cdrom'>
	I0315 06:12:51.782137   25161 main.go:141] libmachine: (ha-866665-m03)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/boot2docker.iso'/>
	I0315 06:12:51.782149   25161 main.go:141] libmachine: (ha-866665-m03)       <target dev='hdc' bus='scsi'/>
	I0315 06:12:51.782163   25161 main.go:141] libmachine: (ha-866665-m03)       <readonly/>
	I0315 06:12:51.782178   25161 main.go:141] libmachine: (ha-866665-m03)     </disk>
	I0315 06:12:51.782190   25161 main.go:141] libmachine: (ha-866665-m03)     <disk type='file' device='disk'>
	I0315 06:12:51.782204   25161 main.go:141] libmachine: (ha-866665-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:12:51.782221   25161 main.go:141] libmachine: (ha-866665-m03)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/ha-866665-m03.rawdisk'/>
	I0315 06:12:51.782233   25161 main.go:141] libmachine: (ha-866665-m03)       <target dev='hda' bus='virtio'/>
	I0315 06:12:51.782246   25161 main.go:141] libmachine: (ha-866665-m03)     </disk>
	I0315 06:12:51.782258   25161 main.go:141] libmachine: (ha-866665-m03)     <interface type='network'>
	I0315 06:12:51.782293   25161 main.go:141] libmachine: (ha-866665-m03)       <source network='mk-ha-866665'/>
	I0315 06:12:51.782318   25161 main.go:141] libmachine: (ha-866665-m03)       <model type='virtio'/>
	I0315 06:12:51.782351   25161 main.go:141] libmachine: (ha-866665-m03)     </interface>
	I0315 06:12:51.782380   25161 main.go:141] libmachine: (ha-866665-m03)     <interface type='network'>
	I0315 06:12:51.782389   25161 main.go:141] libmachine: (ha-866665-m03)       <source network='default'/>
	I0315 06:12:51.782397   25161 main.go:141] libmachine: (ha-866665-m03)       <model type='virtio'/>
	I0315 06:12:51.782403   25161 main.go:141] libmachine: (ha-866665-m03)     </interface>
	I0315 06:12:51.782410   25161 main.go:141] libmachine: (ha-866665-m03)     <serial type='pty'>
	I0315 06:12:51.782415   25161 main.go:141] libmachine: (ha-866665-m03)       <target port='0'/>
	I0315 06:12:51.782422   25161 main.go:141] libmachine: (ha-866665-m03)     </serial>
	I0315 06:12:51.782428   25161 main.go:141] libmachine: (ha-866665-m03)     <console type='pty'>
	I0315 06:12:51.782435   25161 main.go:141] libmachine: (ha-866665-m03)       <target type='serial' port='0'/>
	I0315 06:12:51.782440   25161 main.go:141] libmachine: (ha-866665-m03)     </console>
	I0315 06:12:51.782450   25161 main.go:141] libmachine: (ha-866665-m03)     <rng model='virtio'>
	I0315 06:12:51.782457   25161 main.go:141] libmachine: (ha-866665-m03)       <backend model='random'>/dev/random</backend>
	I0315 06:12:51.782467   25161 main.go:141] libmachine: (ha-866665-m03)     </rng>
	I0315 06:12:51.782473   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.782481   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.782499   25161 main.go:141] libmachine: (ha-866665-m03)   </devices>
	I0315 06:12:51.782515   25161 main.go:141] libmachine: (ha-866665-m03) </domain>
	I0315 06:12:51.782530   25161 main.go:141] libmachine: (ha-866665-m03) 
	I0315 06:12:51.789529   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:31:f3:07 in network default
	I0315 06:12:51.790092   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring networks are active...
	I0315 06:12:51.790112   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:51.790878   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring network default is active
	I0315 06:12:51.791231   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring network mk-ha-866665 is active
	I0315 06:12:51.791565   25161 main.go:141] libmachine: (ha-866665-m03) Getting domain xml...
	I0315 06:12:51.792423   25161 main.go:141] libmachine: (ha-866665-m03) Creating domain...
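
The lines above show the KVM driver defining a libvirt domain from generated XML and then creating (starting) it. Purely as an illustration, not minikube's actual code, a minimal Go sketch using the official libvirt.org/go/libvirt bindings could define and start a domain like this; the connection URI, domain name and the stripped-down XML are assumptions for the example, and the bindings need cgo plus the libvirt development headers.

    // define_and_start.go -- illustrative sketch, not minikube source.
    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // A deliberately minimal domain definition; the real driver also fills in the
    // disks, networks, serial console and RNG device seen in the log above.
    const domainXML = `<domain type='kvm'>
      <name>demo-m03</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
    </domain>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // assumed URI
        if err != nil {
            log.Fatalf("connecting to libvirt: %v", err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persistently define the domain
        if err != nil {
            log.Fatalf("defining domain: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot the defined domain
            log.Fatalf("starting domain: %v", err)
        }
        fmt.Println("domain defined and started")
    }
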
	I0315 06:12:53.035150   25161 main.go:141] libmachine: (ha-866665-m03) Waiting to get IP...
	I0315 06:12:53.036020   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.036527   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.036579   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.036522   25860 retry.go:31] will retry after 298.311457ms: waiting for machine to come up
	I0315 06:12:53.336016   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.336500   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.336523   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.336440   25860 retry.go:31] will retry after 281.788443ms: waiting for machine to come up
	I0315 06:12:53.620158   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.620721   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.620757   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.620683   25860 retry.go:31] will retry after 323.523218ms: waiting for machine to come up
	I0315 06:12:53.946180   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.946609   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.946643   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.946564   25860 retry.go:31] will retry after 451.748742ms: waiting for machine to come up
	I0315 06:12:54.400183   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:54.400665   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:54.400694   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:54.400619   25860 retry.go:31] will retry after 691.034866ms: waiting for machine to come up
	I0315 06:12:55.093354   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:55.093808   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:55.093835   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:55.093767   25860 retry.go:31] will retry after 634.767961ms: waiting for machine to come up
	I0315 06:12:55.729919   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:55.730365   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:55.730409   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:55.730308   25860 retry.go:31] will retry after 874.474327ms: waiting for machine to come up
	I0315 06:12:56.606554   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:56.606937   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:56.606965   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:56.606882   25860 retry.go:31] will retry after 1.259625025s: waiting for machine to come up
	I0315 06:12:57.868160   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:57.868623   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:57.868653   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:57.868582   25860 retry.go:31] will retry after 1.730370758s: waiting for machine to come up
	I0315 06:12:59.601624   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:59.602133   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:59.602158   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:59.602095   25860 retry.go:31] will retry after 1.898634494s: waiting for machine to come up
	I0315 06:13:01.502182   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:01.502681   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:01.502709   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:01.502645   25860 retry.go:31] will retry after 2.001541934s: waiting for machine to come up
	I0315 06:13:03.505961   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:03.506334   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:03.506363   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:03.506283   25860 retry.go:31] will retry after 2.795851868s: waiting for machine to come up
	I0315 06:13:06.305236   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:06.305602   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:06.305619   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:06.305587   25860 retry.go:31] will retry after 4.303060634s: waiting for machine to come up
	I0315 06:13:10.609875   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:10.610290   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:10.610311   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:10.610255   25860 retry.go:31] will retry after 5.533964577s: waiting for machine to come up
	I0315 06:13:16.145959   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.146672   25161 main.go:141] libmachine: (ha-866665-m03) Found IP for machine: 192.168.39.89
	I0315 06:13:16.146704   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has current primary IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
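
The "will retry after ..." lines come from repeatedly checking whether the new VM has picked up an address yet, presumably by scanning the libvirt network's DHCP leases for the machine's MAC. A hedged sketch of that polling-with-backoff pattern using the libvirt.org/go/libvirt bindings; the network name, MAC and timing values are placeholders taken from the log.

    package main

    import (
        "fmt"
        "log"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // waitForIP polls the named libvirt network for a DHCP lease matching mac,
    // sleeping a little longer between attempts, much like the retry lines above.
    func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
        net, err := conn.LookupNetworkByName(network)
        if err != nil {
            return "", err
        }
        defer net.Free()

        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            leases, err := net.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if l.Mac == mac && l.IPaddr != "" {
                    return l.IPaddr, nil
                }
            }
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
    }

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ip, err := waitForIP(conn, "mk-ha-866665", "52:54:00:76:48:bb", 3*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("machine IP:", ip)
    }
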
	I0315 06:13:16.146713   25161 main.go:141] libmachine: (ha-866665-m03) Reserving static IP address...
	I0315 06:13:16.147097   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find host DHCP lease matching {name: "ha-866665-m03", mac: "52:54:00:76:48:bb", ip: "192.168.39.89"} in network mk-ha-866665
	I0315 06:13:16.224039   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Getting to WaitForSSH function...
	I0315 06:13:16.224069   25161 main.go:141] libmachine: (ha-866665-m03) Reserved static IP address: 192.168.39.89
	I0315 06:13:16.224081   25161 main.go:141] libmachine: (ha-866665-m03) Waiting for SSH to be available...
	I0315 06:13:16.227293   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.227831   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.227861   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.228100   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using SSH client type: external
	I0315 06:13:16.228126   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa (-rw-------)
	I0315 06:13:16.228153   25161 main.go:141] libmachine: (ha-866665-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:13:16.228167   25161 main.go:141] libmachine: (ha-866665-m03) DBG | About to run SSH command:
	I0315 06:13:16.228182   25161 main.go:141] libmachine: (ha-866665-m03) DBG | exit 0
	I0315 06:13:16.360633   25161 main.go:141] libmachine: (ha-866665-m03) DBG | SSH cmd err, output: <nil>: 
	I0315 06:13:16.360894   25161 main.go:141] libmachine: (ha-866665-m03) KVM machine creation complete!
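
Machine creation is only reported complete once "exit 0" succeeds over SSH, here via an external ssh client with host-key checking disabled (see the option list in the log). A rough stand-alone equivalent with os/exec, for illustration only; the address, key path and retry count are placeholders.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // sshReady returns nil once "exit 0" can be run on the host over SSH.
    func sshReady(addr, keyPath string, attempts int) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@" + addr,
            "exit", "0",
        }
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("ssh", args...).Run(); err == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("SSH never became available on %s: %w", addr, err)
    }

    func main() {
        if err := sshReady("192.168.39.89", "/path/to/id_rsa", 20); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }
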
	I0315 06:13:16.361233   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:13:16.361739   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:16.361905   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:16.362037   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:13:16.362079   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:13:16.363397   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:13:16.363414   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:13:16.363421   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:13:16.363427   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.365926   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.366337   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.366369   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.366516   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.366712   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.366872   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.367008   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.367121   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.367391   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.367404   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:13:16.483839   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:13:16.483866   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:13:16.483876   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.486968   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.487349   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.487372   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.487482   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.487675   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.487823   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.487996   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.488192   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.488353   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.488365   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:13:16.605506   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:13:16.605595   25161 main.go:141] libmachine: found compatible host: buildroot
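
Provisioner detection boils down to reading /etc/os-release on the guest and matching its ID/NAME fields (Buildroot here). A small sketch of that parsing step, operating on the file contents as a string; the sample input is the output shown just above.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
    // stripping surrounding quotes (e.g. PRETTY_NAME="Buildroot 2023.02.9").
    func parseOSRelease(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        fmt.Println("detected provisioner:", info["ID"], info["VERSION"])
    }
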
	I0315 06:13:16.605610   25161 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:13:16.605622   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.605919   25161 buildroot.go:166] provisioning hostname "ha-866665-m03"
	I0315 06:13:16.605947   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.606123   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.608659   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.609100   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.609137   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.609194   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.609394   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.609567   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.609731   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.609910   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.610068   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.610079   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665-m03 && echo "ha-866665-m03" | sudo tee /etc/hostname
	I0315 06:13:16.741484   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665-m03
	
	I0315 06:13:16.741514   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.744403   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.744887   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.744916   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.745131   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.745316   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.745462   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.745600   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.745780   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.745948   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.745968   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:13:16.872038   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
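
The hostname step runs two commands over SSH: set the hostname, then make sure /etc/hosts has a 127.0.1.1 entry for it (replacing an existing 127.0.1.1 line if present). A sketch that merely assembles those shell snippets for a given hostname, mirroring the commands in the log; actually executing them is left to whatever SSH runner is in use.

    package main

    import "fmt"

    // hostnameCommands returns the two shell snippets used to provision a hostname:
    // one to set it, one to ensure /etc/hosts maps 127.0.1.1 to it.
    func hostnameCommands(name string) (setCmd, hostsCmd string) {
        setCmd = fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
        hostsCmd = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        return setCmd, hostsCmd
    }

    func main() {
        set, hosts := hostnameCommands("ha-866665-m03")
        fmt.Println(set)
        fmt.Println(hosts)
    }
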
	I0315 06:13:16.872077   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:13:16.872093   25161 buildroot.go:174] setting up certificates
	I0315 06:13:16.872103   25161 provision.go:84] configureAuth start
	I0315 06:13:16.872112   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.872366   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:16.875149   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.875549   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.875578   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.875796   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.878408   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.878796   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.878826   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.878959   25161 provision.go:143] copyHostCerts
	I0315 06:13:16.878989   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:13:16.879030   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:13:16.879051   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:13:16.879133   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:13:16.879263   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:13:16.879290   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:13:16.879300   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:13:16.879348   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:13:16.879447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:13:16.879474   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:13:16.879480   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:13:16.879515   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:13:16.879611   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665-m03 san=[127.0.0.1 192.168.39.89 ha-866665-m03 localhost minikube]
	I0315 06:13:17.071846   25161 provision.go:177] copyRemoteCerts
	I0315 06:13:17.071907   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:13:17.071930   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.074848   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.075190   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.075220   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.075462   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.075687   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.075843   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.075966   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.162763   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:13:17.162827   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:13:17.189144   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:13:17.189229   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 06:13:17.217003   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:13:17.217064   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:13:17.243067   25161 provision.go:87] duration metric: took 370.952795ms to configureAuth
	I0315 06:13:17.243129   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:13:17.243358   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:17.243439   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.246118   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.246494   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.246529   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.246689   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.246863   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.247008   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.247186   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.247353   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:17.247503   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:17.247518   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:13:17.548364   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:13:17.548399   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:13:17.548411   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetURL
	I0315 06:13:17.549886   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using libvirt version 6000000
	I0315 06:13:17.552092   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.552605   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.552634   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.552775   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:13:17.552787   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:13:17.552793   25161 client.go:171] duration metric: took 26.219813183s to LocalClient.Create
	I0315 06:13:17.552814   25161 start.go:167] duration metric: took 26.21987276s to libmachine.API.Create "ha-866665"
	I0315 06:13:17.552827   25161 start.go:293] postStartSetup for "ha-866665-m03" (driver="kvm2")
	I0315 06:13:17.552840   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:13:17.552860   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.553089   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:13:17.553112   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.555406   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.555833   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.555863   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.555982   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.556159   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.556331   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.556487   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.645620   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:13:17.650150   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:13:17.650175   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:13:17.650269   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:13:17.650361   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:13:17.650373   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:13:17.650473   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:13:17.660972   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:13:17.686274   25161 start.go:296] duration metric: took 133.43279ms for postStartSetup
	I0315 06:13:17.686339   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:13:17.686914   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:17.690246   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.690732   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.690768   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.691087   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:13:17.691296   25161 start.go:128] duration metric: took 26.376846774s to createHost
	I0315 06:13:17.691321   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.693732   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.694136   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.694167   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.694333   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.694484   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.694662   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.694810   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.694986   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:17.695155   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:17.695166   25161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:13:17.817650   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483197.793014240
	
	I0315 06:13:17.817676   25161 fix.go:216] guest clock: 1710483197.793014240
	I0315 06:13:17.817686   25161 fix.go:229] Guest: 2024-03-15 06:13:17.79301424 +0000 UTC Remote: 2024-03-15 06:13:17.691310036 +0000 UTC m=+175.689578469 (delta=101.704204ms)
	I0315 06:13:17.817709   25161 fix.go:200] guest clock delta is within tolerance: 101.704204ms
	I0315 06:13:17.817717   25161 start.go:83] releasing machines lock for "ha-866665-m03", held for 26.503394445s
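
The "guest clock" lines compare the VM's `date +%s.%N` output against the host-side reference time and accept the machine when the delta is within tolerance. A toy reproduction of that comparison using the two values printed above (the tolerance constant is an assumption of the sketch):

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch converts "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if nsecStr != "" {
            if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Both values are copied from the log lines above.
        guest, err := parseEpoch("1710483197.793014240")
        if err != nil {
            log.Fatal(err)
        }
        remote, err := time.Parse("2006-01-02 15:04:05.999999999 -0700", "2024-03-15 06:13:17.691310036 +0000")
        if err != nil {
            log.Fatal(err)
        }
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance for the sketch
        fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
    }
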
	I0315 06:13:17.817741   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.818005   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:17.820569   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.820956   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.820993   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.823575   25161 out.go:177] * Found network options:
	I0315 06:13:17.825308   25161 out.go:177]   - NO_PROXY=192.168.39.78,192.168.39.27
	W0315 06:13:17.826923   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 06:13:17.826942   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:13:17.826955   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827544   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827752   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827852   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:13:17.827888   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	W0315 06:13:17.827969   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 06:13:17.827994   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:13:17.828056   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:13:17.828078   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.830849   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.830955   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831208   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.831246   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831393   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.831503   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.831527   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831563   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.831758   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.831760   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.831966   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.831955   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.832132   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.832324   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:18.085787   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:13:18.092348   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:13:18.092432   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:13:18.110796   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:13:18.110825   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:13:18.110906   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:13:18.130014   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:13:18.144546   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:13:18.144603   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:13:18.160376   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:13:18.175139   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:13:18.307170   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:13:18.480533   25161 docker.go:233] disabling docker service ...
	I0315 06:13:18.480607   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:13:18.496871   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:13:18.512932   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:13:18.652631   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:13:18.784108   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:13:18.799682   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:13:18.821219   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:13:18.821290   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.832880   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:13:18.832951   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.844364   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.855802   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.868166   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:13:18.879160   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:13:18.889700   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:13:18.889769   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:13:18.905254   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:13:18.916136   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:19.062538   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
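
Pointing CRI-O at the right pause image and cgroup driver is done with in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf, followed by a daemon-reload and a crio restart. A sketch that runs the same kind of commands locally via os/exec; the sed expressions mirror the log, but treating this as a stand-alone root-only script is an assumption of the example.

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one shell command with sudo and aborts on failure.
    func run(cmdline string) {
        out, err := exec.Command("sudo", "sh", "-c", cmdline).CombinedOutput()
        if err != nil {
            log.Fatalf("%s: %v\n%s", cmdline, err, out)
        }
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        // Pin the pause image and switch the cgroup manager, as in the log above.
        run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf)
        run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
        run(`sed -i '/conmon_cgroup = .*/d' ` + conf)
        run(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
        // Apply the change.
        run("systemctl daemon-reload")
        run("systemctl restart crio")
    }
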
	I0315 06:13:19.219783   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:13:19.219860   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:13:19.225963   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:13:19.226038   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:13:19.230678   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:13:19.271407   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:13:19.271485   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:13:19.304639   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:13:19.343075   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:13:19.344917   25161 out.go:177]   - env NO_PROXY=192.168.39.78
	I0315 06:13:19.346592   25161 out.go:177]   - env NO_PROXY=192.168.39.78,192.168.39.27
	I0315 06:13:19.348317   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:19.351550   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:19.351969   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:19.352005   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:19.352278   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:13:19.357181   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:13:19.371250   25161 mustload.go:65] Loading cluster: ha-866665
	I0315 06:13:19.371465   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:19.371703   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:19.371741   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:19.387368   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0315 06:13:19.387853   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:19.388336   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:19.388351   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:19.388758   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:19.388940   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:13:19.390736   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:13:19.391070   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:19.391119   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:19.406949   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0315 06:13:19.407440   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:19.407986   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:19.408009   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:19.408382   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:19.408570   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:13:19.408770   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.89
	I0315 06:13:19.408787   25161 certs.go:194] generating shared ca certs ...
	I0315 06:13:19.408804   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.408959   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:13:19.409018   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:13:19.409031   25161 certs.go:256] generating profile certs ...
	I0315 06:13:19.409130   25161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:13:19.409166   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4
	I0315 06:13:19.409187   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:13:19.601873   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 ...
	I0315 06:13:19.601901   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4: {Name:mk3a9401e785e81d9d4b250b9aabdd54331f0925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.602059   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4 ...
	I0315 06:13:19.602071   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4: {Name:mk6d7a4285f4b6cc1db493575ebcf69c5f0eb90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.602134   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:13:19.602264   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:13:19.602380   25161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:13:19.602395   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:13:19.602406   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:13:19.602416   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:13:19.602425   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:13:19.602435   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:13:19.602447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:13:19.602461   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:13:19.602470   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:13:19.602530   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:13:19.602557   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:13:19.602566   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:13:19.602588   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:13:19.602609   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:13:19.602631   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:13:19.602669   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:13:19.602695   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:13:19.602710   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:13:19.602723   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:19.602752   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:13:19.606208   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:19.606767   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:13:19.606808   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:19.607044   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:13:19.607256   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:13:19.607383   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:13:19.607621   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:13:19.680841   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 06:13:19.686663   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 06:13:19.699918   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 06:13:19.704654   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 06:13:19.719942   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 06:13:19.724961   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 06:13:19.739220   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 06:13:19.744145   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0315 06:13:19.757712   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 06:13:19.763027   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 06:13:19.777923   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 06:13:19.782472   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0315 06:13:19.794362   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:13:19.822600   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:13:19.850637   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:13:19.879297   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:13:19.906629   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0315 06:13:19.933751   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:13:19.959528   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:13:19.987312   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:13:20.016093   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:13:20.046080   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:13:20.076406   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:13:20.104494   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 06:13:20.123584   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 06:13:20.143595   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 06:13:20.162301   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0315 06:13:20.182440   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 06:13:20.201422   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0315 06:13:20.222325   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 06:13:20.243409   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:13:20.249530   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:13:20.262093   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.266970   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.267032   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.273065   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:13:20.286946   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:13:20.300302   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.305424   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.305485   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.311885   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:13:20.325415   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:13:20.339226   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.344845   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.344908   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.351216   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:13:20.365073   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:13:20.370323   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:13:20.370378   25161 kubeadm.go:928] updating node {m03 192.168.39.89 8443 v1.28.4 crio true true} ...
	I0315 06:13:20.370464   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:13:20.370490   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:13:20.370536   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:13:20.390769   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:13:20.390844   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 06:13:20.390920   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:13:20.402252   25161 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 06:13:20.402322   25161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 06:13:20.413609   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 06:13:20.413634   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0315 06:13:20.413641   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:13:20.413682   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:13:20.413727   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:13:20.413609   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0315 06:13:20.413771   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:13:20.413860   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:13:20.418768   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 06:13:20.418804   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 06:13:20.444447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:13:20.444452   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 06:13:20.444545   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 06:13:20.444585   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:13:20.508056   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 06:13:20.508106   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0315 06:13:21.483097   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 06:13:21.494291   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 06:13:21.516613   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:13:21.536637   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:13:21.556286   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:13:21.561424   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:13:21.575899   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:21.711123   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:13:21.730533   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:13:21.730862   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:21.730910   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:21.746267   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0315 06:13:21.746738   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:21.747231   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:21.747254   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:21.747637   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:21.747857   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:13:21.748031   25161 start.go:316] joinCluster: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:13:21.748187   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 06:13:21.748212   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:13:21.751415   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:21.751947   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:13:21.751973   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:21.752155   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:13:21.752320   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:13:21.752515   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:13:21.752676   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:13:21.916601   25161 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:13:21.916650   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kr2r6t.3p96coeihyw3qpvz --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I0315 06:13:50.038289   25161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kr2r6t.3p96coeihyw3qpvz --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (28.121613009s)
	I0315 06:13:50.038330   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 06:13:50.529373   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m03 minikube.k8s.io/updated_at=2024_03_15T06_13_50_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false
	I0315 06:13:50.675068   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-866665-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 06:13:50.783667   25161 start.go:318] duration metric: took 29.035633105s to joinCluster
	I0315 06:13:50.783744   25161 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:13:50.785272   25161 out.go:177] * Verifying Kubernetes components...
	I0315 06:13:50.784078   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:50.786680   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:51.048820   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:13:51.065661   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:13:51.065880   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 06:13:51.065935   25161 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.78:8443
	I0315 06:13:51.066136   25161 node_ready.go:35] waiting up to 6m0s for node "ha-866665-m03" to be "Ready" ...
	I0315 06:13:51.066208   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:51.066219   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:51.066230   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:51.066239   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:51.070343   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:51.567067   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:51.567092   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:51.567110   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:51.567115   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:51.571135   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:52.067200   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:52.067219   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:52.067227   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:52.067230   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:52.071116   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:52.567046   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:52.567068   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:52.567076   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:52.567080   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:52.571252   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:53.066954   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:53.066976   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:53.066986   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:53.066993   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:53.071221   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:53.072130   25161 node_ready.go:53] node "ha-866665-m03" has status "Ready":"False"
	I0315 06:13:53.566345   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:53.566373   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:53.566385   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:53.566392   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:53.571000   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:54.066700   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:54.066723   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:54.066731   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:54.066735   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:54.070373   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:54.566329   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:54.566354   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:54.566365   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:54.566371   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:54.571077   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:55.067093   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:55.067115   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:55.067123   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:55.067126   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:55.071034   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:55.567255   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:55.567278   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:55.567285   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:55.567290   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:55.570915   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:55.571668   25161 node_ready.go:53] node "ha-866665-m03" has status "Ready":"False"
	I0315 06:13:56.066954   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.066973   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.066981   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.066985   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.070691   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.071415   25161 node_ready.go:49] node "ha-866665-m03" has status "Ready":"True"
	I0315 06:13:56.071435   25161 node_ready.go:38] duration metric: took 5.005282027s for node "ha-866665-m03" to be "Ready" ...
	I0315 06:13:56.071444   25161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:13:56.071520   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:13:56.071532   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.071542   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.071554   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.078886   25161 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0315 06:13:56.085590   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.085671   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mgthb
	I0315 06:13:56.085680   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.085688   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.085693   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.089325   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.089998   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.090014   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.090021   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.090025   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.092988   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:13:56.093428   25161 pod_ready.go:92] pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.093444   25161 pod_ready.go:81] duration metric: took 7.831568ms for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.093453   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.093537   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r57px
	I0315 06:13:56.093551   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.093561   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.093568   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.096866   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.097525   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.097544   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.097555   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.097559   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.101060   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.101959   25161 pod_ready.go:92] pod "coredns-5dd5756b68-r57px" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.101978   25161 pod_ready.go:81] duration metric: took 8.51782ms for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.101990   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.102051   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665
	I0315 06:13:56.102062   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.102072   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.102082   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.107567   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:56.108157   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.108173   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.108183   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.108187   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.112528   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:56.113299   25161 pod_ready.go:92] pod "etcd-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.113314   25161 pod_ready.go:81] duration metric: took 11.317379ms for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.113324   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.113368   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m02
	I0315 06:13:56.113375   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.113383   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.113386   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.118160   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:56.119257   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:56.119272   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.119279   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.119282   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.122864   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.123414   25161 pod_ready.go:92] pod "etcd-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.123431   25161 pod_ready.go:81] duration metric: took 10.102076ms for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.123440   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.267803   25161 request.go:629] Waited for 144.311021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.267873   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.267883   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.267891   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.267895   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.271386   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.467461   25161 request.go:629] Waited for 195.39417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.467526   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.467533   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.467541   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.467547   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.471981   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:56.666996   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.667016   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.667030   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.667039   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.672207   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:56.867209   25161 request.go:629] Waited for 194.291173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.867300   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.867310   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.867317   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.867325   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.870748   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.123654   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:57.123676   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.123684   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.123688   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.127313   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.267584   25161 request.go:629] Waited for 139.352755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.267646   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.267664   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.267671   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.267675   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.271352   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.623926   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:57.623948   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.623957   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.623963   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.629129   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:57.667927   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.667958   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.667964   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.667968   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.671784   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.123940   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:58.123962   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.123970   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.123975   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.127633   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.128261   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:58.128275   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.128281   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.128284   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.131681   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.132582   25161 pod_ready.go:102] pod "etcd-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:13:58.623697   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:58.623719   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.623728   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.623732   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.627712   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.628448   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:58.628480   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.628492   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.628499   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.631686   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.632296   25161 pod_ready.go:92] pod "etcd-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:58.632314   25161 pod_ready.go:81] duration metric: took 2.508868218s for pod "etcd-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.632330   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.667659   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:13:58.667681   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.667689   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.667695   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.671600   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.867965   25161 request.go:629] Waited for 195.346208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:58.868025   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:58.868031   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.868039   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.868044   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.872066   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:58.872619   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:58.872636   25161 pod_ready.go:81] duration metric: took 240.300208ms for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.872645   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.066992   25161 request.go:629] Waited for 194.282943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:13:59.067065   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:13:59.067077   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.067086   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.067095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.070872   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:59.267994   25161 request.go:629] Waited for 196.368377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:59.268061   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:59.268071   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.268084   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.268094   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.272096   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:59.272687   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:59.272712   25161 pod_ready.go:81] duration metric: took 400.060283ms for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.272727   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.467838   25161 request.go:629] Waited for 195.03102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.467911   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.467917   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.467925   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.467930   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.472237   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:59.667365   25161 request.go:629] Waited for 194.371732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:59.667427   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:59.667435   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.667448   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.667454   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.671634   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:59.867424   25161 request.go:629] Waited for 94.276848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.867493   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.867500   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.867510   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.867516   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.871467   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.067802   25161 request.go:629] Waited for 195.400399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.067897   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.067916   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.067926   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.067932   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.071709   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.273311   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:00.273335   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.273344   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.273348   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.278307   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:00.467629   25161 request.go:629] Waited for 188.376209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.467685   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.467691   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.467701   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.467711   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.471740   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:00.773689   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:00.773711   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.773719   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.773722   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.777511   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.867555   25161 request.go:629] Waited for 89.227235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.867628   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.867634   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.867641   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.867645   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.871502   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.273450   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:01.273477   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.273503   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.273510   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.277314   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.278175   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:01.278193   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.278203   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.278209   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.281480   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.282027   25161 pod_ready.go:102] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:14:01.773594   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:01.773616   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.773623   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.773627   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.777711   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:01.778569   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:01.778604   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.778614   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.778623   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.781948   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.273959   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:02.273985   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.273993   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.273998   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.277909   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.279046   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:02.279064   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.279071   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.279075   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.282065   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:14:02.773586   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:02.773607   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.773622   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.773628   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.777171   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.777964   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:02.777977   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.777984   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.777988   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.781353   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:03.273793   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:03.273816   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.273825   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.273829   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.278546   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.279438   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:03.279456   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.279467   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.279472   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.283715   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.284165   25161 pod_ready.go:102] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:14:03.773330   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:03.773356   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.773367   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.773373   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.777562   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.778583   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:03.778604   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.778615   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.778622   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.782127   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.273595   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:04.273618   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.273627   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.273632   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.277682   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:04.278477   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.278510   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.278522   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.278528   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.283682   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:04.284233   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.284252   25161 pod_ready.go:81] duration metric: took 5.011513967s for pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.284261   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.284314   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:14:04.284322   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.284329   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.284333   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.287542   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.288016   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:04.288031   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.288038   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.288041   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.291184   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.291801   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.291822   25161 pod_ready.go:81] duration metric: took 7.55545ms for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.291833   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.291882   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:14:04.291889   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.291895   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.291904   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.294962   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.467644   25161 request.go:629] Waited for 171.948514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:04.467696   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:04.467702   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.467717   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.467721   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.472005   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:04.472461   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.472507   25161 pod_ready.go:81] duration metric: took 180.666536ms for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.472518   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.667987   25161 request.go:629] Waited for 195.400575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:04.668039   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:04.668045   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.668055   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.668059   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.671954   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.868032   25161 request.go:629] Waited for 195.436161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.868127   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.868135   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.868147   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.868155   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.872533   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.067526   25161 request.go:629] Waited for 94.337643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:05.067591   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:05.067597   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.067608   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.067613   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.071591   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:05.267695   25161 request.go:629] Waited for 195.366025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.267748   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.267759   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.267768   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.267774   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.272158   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.273061   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:05.273086   25161 pod_ready.go:81] duration metric: took 800.560339ms for pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.273100   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wxfg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.467586   25161 request.go:629] Waited for 194.422691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wxfg
	I0315 06:14:05.467681   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wxfg
	I0315 06:14:05.467694   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.467705   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.467717   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.471891   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.667940   25161 request.go:629] Waited for 195.377355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.668005   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.668011   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.668018   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.668024   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.674197   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:14:05.674739   25161 pod_ready.go:92] pod "kube-proxy-6wxfg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:05.674757   25161 pod_ready.go:81] duration metric: took 401.647952ms for pod "kube-proxy-6wxfg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.674769   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.868045   25161 request.go:629] Waited for 193.209712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:14:05.868130   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:14:05.868135   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.868142   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.868147   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.878231   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:14:06.067405   25161 request.go:629] Waited for 187.322806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:06.067484   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:06.067490   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.067497   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.067501   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.071957   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:06.072441   25161 pod_ready.go:92] pod "kube-proxy-lqzk8" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.072482   25161 pod_ready.go:81] duration metric: took 397.687128ms for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.072497   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.267564   25161 request.go:629] Waited for 194.989792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:14:06.267625   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:14:06.267630   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.267637   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.267642   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.271381   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:06.467810   25161 request.go:629] Waited for 195.461072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.467911   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.467925   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.467935   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.467943   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.471989   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:06.472812   25161 pod_ready.go:92] pod "kube-proxy-sbxgg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.472831   25161 pod_ready.go:81] duration metric: took 400.326596ms for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.472843   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.666996   25161 request.go:629] Waited for 194.085115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:14:06.667074   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:14:06.667079   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.667087   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.667094   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.671048   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:06.866969   25161 request.go:629] Waited for 195.186475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.867065   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.867087   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.867095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.867106   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.873323   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:14:06.873883   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.873905   25161 pod_ready.go:81] duration metric: took 401.054482ms for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.873915   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.067349   25161 request.go:629] Waited for 193.371689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:14:07.067423   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:14:07.067430   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.067440   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.067447   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.071395   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:07.267670   25161 request.go:629] Waited for 195.463984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:07.267734   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:07.267741   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.267750   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.267757   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.271416   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:07.272074   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:07.272093   25161 pod_ready.go:81] duration metric: took 398.171188ms for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.272105   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.467215   25161 request.go:629] Waited for 195.044748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m03
	I0315 06:14:07.467288   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m03
	I0315 06:14:07.467294   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.467302   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.467306   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.472949   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:07.666968   25161 request.go:629] Waited for 193.372356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:07.667064   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:07.667081   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.667091   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.667100   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.677989   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:14:07.678508   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:07.678529   25161 pod_ready.go:81] duration metric: took 406.417977ms for pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.678541   25161 pod_ready.go:38] duration metric: took 11.60708612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
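The readiness waits recorded above reduce to a simple poll loop: GET the pod, inspect its Ready condition, confirm the owning node, and retry until the 6m0s deadline expires. The Go sketch below illustrates that polling pattern with client-go; it is illustrative only, not minikube's pod_ready implementation, and the kubeconfig path, namespace, and pod name are assumed values.

// Illustrative sketch of the pod-readiness poll loop visible in the log.
// Kubeconfig path, namespace, and pod name are assumptions for the example.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m0s budget the log uses for each "waiting up to 6m0s for pod ..." step.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	const ns, name = "kube-system", "kube-apiserver-ha-866665-m03"
	for {
		pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for %q\n", name)
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}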
	I0315 06:14:07.678556   25161 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:14:07.678636   25161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:14:07.700935   25161 api_server.go:72] duration metric: took 16.917153632s to wait for apiserver process to appear ...
	I0315 06:14:07.700961   25161 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:14:07.700984   25161 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0315 06:14:07.711901   25161 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0315 06:14:07.711991   25161 round_trippers.go:463] GET https://192.168.39.78:8443/version
	I0315 06:14:07.711998   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.712007   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.712012   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.714787   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:14:07.714937   25161 api_server.go:141] control plane version: v1.28.4
	I0315 06:14:07.714960   25161 api_server.go:131] duration metric: took 13.992544ms to wait for apiserver health ...
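The healthz and /version probes logged above are plain HTTPS GETs against the apiserver, retried until they return 200 with the body "ok". A minimal sketch follows, assuming the endpoint shown in the log and skipping TLS verification for brevity (a real check would load the cluster CA instead); this is not minikube's api_server code.

// Minimal sketch of an apiserver healthz probe; endpoint, certificate handling,
// and timeout values are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.78:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The apiserver answers HTTP 200 with "ok" once it is healthy, as in the log.
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}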
	I0315 06:14:07.714969   25161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:14:07.867219   25161 request.go:629] Waited for 152.185848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:07.867277   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:07.867282   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.867289   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.867293   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.876492   25161 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 06:14:07.883095   25161 system_pods.go:59] 24 kube-system pods found
	I0315 06:14:07.883126   25161 system_pods.go:61] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:14:07.883132   25161 system_pods.go:61] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:14:07.883136   25161 system_pods.go:61] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:14:07.883141   25161 system_pods.go:61] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:14:07.883145   25161 system_pods.go:61] "etcd-ha-866665-m03" [20f9ca29-a258-454a-a497-22ad15f35c6d] Running
	I0315 06:14:07.883148   25161 system_pods.go:61] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:14:07.883151   25161 system_pods.go:61] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:14:07.883153   25161 system_pods.go:61] "kindnet-qr9qm" [bd816497-5a8b-4028-9fa5-d4f5739b651e] Running
	I0315 06:14:07.883156   25161 system_pods.go:61] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:14:07.883159   25161 system_pods.go:61] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:14:07.883162   25161 system_pods.go:61] "kube-apiserver-ha-866665-m03" [03abb17f-377c-422b-9e2a-2c837bafa855] Running
	I0315 06:14:07.883165   25161 system_pods.go:61] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:14:07.883168   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:14:07.883171   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m03" [e09a088d-2fd3-4abb-a4d6-796ec9a94544] Running
	I0315 06:14:07.883173   25161 system_pods.go:61] "kube-proxy-6wxfg" [ee19b698-ba60-4edb-bb37-d9ca6a1793b2] Running
	I0315 06:14:07.883176   25161 system_pods.go:61] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:14:07.883178   25161 system_pods.go:61] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:14:07.883182   25161 system_pods.go:61] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:14:07.883185   25161 system_pods.go:61] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:14:07.883189   25161 system_pods.go:61] "kube-scheduler-ha-866665-m03" [9e7712b2-d794-4544-9044-6a5acf281303] Running
	I0315 06:14:07.883191   25161 system_pods.go:61] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:14:07.883195   25161 system_pods.go:61] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:14:07.883197   25161 system_pods.go:61] "kube-vip-ha-866665-m03" [73e7ac10-6df8-440e-98af-b3724499b73e] Running
	I0315 06:14:07.883200   25161 system_pods.go:61] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:14:07.883206   25161 system_pods.go:74] duration metric: took 168.231276ms to wait for pod list to return data ...
	I0315 06:14:07.883213   25161 default_sa.go:34] waiting for default service account to be created ...
	I0315 06:14:08.067727   25161 request.go:629] Waited for 184.450892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:14:08.067890   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:14:08.067908   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.067915   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.067920   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.072178   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:08.072304   25161 default_sa.go:45] found service account: "default"
	I0315 06:14:08.072323   25161 default_sa.go:55] duration metric: took 189.104157ms for default service account to be created ...
	I0315 06:14:08.072337   25161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 06:14:08.267770   25161 request.go:629] Waited for 195.367442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:08.267840   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:08.267846   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.267853   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.267857   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.275938   25161 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0315 06:14:08.282381   25161 system_pods.go:86] 24 kube-system pods found
	I0315 06:14:08.282413   25161 system_pods.go:89] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:14:08.282419   25161 system_pods.go:89] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:14:08.282424   25161 system_pods.go:89] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:14:08.282429   25161 system_pods.go:89] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:14:08.282434   25161 system_pods.go:89] "etcd-ha-866665-m03" [20f9ca29-a258-454a-a497-22ad15f35c6d] Running
	I0315 06:14:08.282438   25161 system_pods.go:89] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:14:08.282442   25161 system_pods.go:89] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:14:08.282445   25161 system_pods.go:89] "kindnet-qr9qm" [bd816497-5a8b-4028-9fa5-d4f5739b651e] Running
	I0315 06:14:08.282449   25161 system_pods.go:89] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:14:08.282453   25161 system_pods.go:89] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:14:08.282457   25161 system_pods.go:89] "kube-apiserver-ha-866665-m03" [03abb17f-377c-422b-9e2a-2c837bafa855] Running
	I0315 06:14:08.282461   25161 system_pods.go:89] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:14:08.282464   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:14:08.282468   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m03" [e09a088d-2fd3-4abb-a4d6-796ec9a94544] Running
	I0315 06:14:08.282472   25161 system_pods.go:89] "kube-proxy-6wxfg" [ee19b698-ba60-4edb-bb37-d9ca6a1793b2] Running
	I0315 06:14:08.282475   25161 system_pods.go:89] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:14:08.282479   25161 system_pods.go:89] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:14:08.282482   25161 system_pods.go:89] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:14:08.282485   25161 system_pods.go:89] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:14:08.282489   25161 system_pods.go:89] "kube-scheduler-ha-866665-m03" [9e7712b2-d794-4544-9044-6a5acf281303] Running
	I0315 06:14:08.282493   25161 system_pods.go:89] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:14:08.282496   25161 system_pods.go:89] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:14:08.282500   25161 system_pods.go:89] "kube-vip-ha-866665-m03" [73e7ac10-6df8-440e-98af-b3724499b73e] Running
	I0315 06:14:08.282503   25161 system_pods.go:89] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:14:08.282510   25161 system_pods.go:126] duration metric: took 210.167958ms to wait for k8s-apps to be running ...
	I0315 06:14:08.282517   25161 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 06:14:08.282563   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:14:08.302719   25161 system_svc.go:56] duration metric: took 20.192329ms WaitForService to wait for kubelet
	I0315 06:14:08.302752   25161 kubeadm.go:576] duration metric: took 17.518975971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:14:08.302777   25161 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:14:08.467146   25161 request.go:629] Waited for 164.280557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes
	I0315 06:14:08.467202   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes
	I0315 06:14:08.467208   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.467215   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.467218   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.472514   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:08.473633   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473655   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473665   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473668   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473671   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473675   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473678   25161 node_conditions.go:105] duration metric: took 170.896148ms to run NodePressure ...
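The NodePressure pass above reads each node's capacity fields (ephemeral storage and CPU) from the Nodes API. A rough client-go sketch of that read is shown below, with an assumed kubeconfig path and trimmed error handling; it mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines but is not minikube's node_conditions code.

// Rough sketch of listing nodes and printing per-node capacity, as reflected
// in the log above. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}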
	I0315 06:14:08.473689   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:14:08.473708   25161 start.go:254] writing updated cluster config ...
	I0315 06:14:08.474060   25161 ssh_runner.go:195] Run: rm -f paused
	I0315 06:14:08.528488   25161 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 06:14:08.530950   25161 out.go:177] * Done! kubectl is now configured to use "ha-866665" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.830688418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483458830661496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79bf83af-fa5a-47eb-9cba-1585e0c398bc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.831169212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eca98e60-067b-43c9-bdea-1a65a44bd448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.831347159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eca98e60-067b-43c9-bdea-1a65a44bd448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.831698322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eca98e60-067b-43c9-bdea-1a65a44bd448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.871160248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51bbc053-0071-447b-8e63-b73b670efe2f name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.871283733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51bbc053-0071-447b-8e63-b73b670efe2f name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.872817131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dcf790a-a06d-435e-b060-65421f0341a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.873485750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483458873401793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dcf790a-a06d-435e-b060-65421f0341a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.874022128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ee2bc65-d2ef-485c-9766-944b3bdc0eb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.874077577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ee2bc65-d2ef-485c-9766-944b3bdc0eb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.874397753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ee2bc65-d2ef-485c-9766-944b3bdc0eb3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.916418149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41a04711-ee20-44d4-92bd-86666b849869 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.916495840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41a04711-ee20-44d4-92bd-86666b849869 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.917538836Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b1a0830-8b9c-4b38-bcd0-69cad1cef6cc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.918003633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483458917978071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b1a0830-8b9c-4b38-bcd0-69cad1cef6cc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.918505995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a84d38a-1f86-4d05-858b-a09b1219debf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.918585227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a84d38a-1f86-4d05-858b-a09b1219debf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.918873584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a84d38a-1f86-4d05-858b-a09b1219debf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.964735830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c8f32c6-607b-4d91-a266-96ffb1699552 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.964845255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c8f32c6-607b-4d91-a266-96ffb1699552 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.965866006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=585f3684-e271-4a45-a802-8d0c549594b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.967570694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483458967543273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=585f3684-e271-4a45-a802-8d0c549594b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.968298349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfad8e82-65bd-4992-9a80-a1e821bbf391 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.968376329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfad8e82-65bd-4992-9a80-a1e821bbf391 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:17:38 ha-866665 crio[677]: time="2024-03-15 06:17:38.968638220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfad8e82-65bd-4992-9a80-a1e821bbf391 name=/runtime.v1.RuntimeService/ListContainers
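	The repeated /runtime.v1.RuntimeService/ListContainers responses above, and the crictl-style table in the next section, are served by the same CRI endpoint on the CRI-O socket. As a rough illustration only (not part of the captured test run), a minimal Go sketch that issues the same RPC against unix:///var/run/crio/crio.sock could look like the following; it assumes the k8s.io/cri-api and google.golang.org/grpc modules and a node where CRI-O is listening on its default socket. In practice the same view is normally obtained with crictl ps -a on the node.
	
	// Minimal sketch (assumption: run on a node with CRI-O on its default
	// socket). It issues the same ListContainers RPC seen in the crio debug
	// log above and prints name, state and pod for each container.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter returns the full container list, matching the
		// "No filters were applied" lines in the log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-25s %-18s %s\n",
				c.GetMetadata().GetName(),
				c.GetState().String(),
				c.GetLabels()["io.kubernetes.pod.name"])
		}
	}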
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3893d7b08f562       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4b1a833979698       busybox-5b5d89c9d6-82knb
	21104767a9371       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   c25a805c38573       storage-provisioner
	652c2ee94f6f3       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   2095201e88b51       kube-vip-ha-866665
	bede6c7f8912b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   89474c2214060       coredns-5dd5756b68-r57px
	c0ecd2e858892       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   72c22c098aee5       coredns-5dd5756b68-mgthb
	2a4339afd096a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       0                   c25a805c38573       storage-provisioner
	7b60508bed4fc       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   cb12b8ff5eaf3       kindnet-9nvvx
	c07640cff4ced       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   e15b87fb1896f       kube-proxy-sbxgg
	a90e2aa6abce9       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   2095201e88b51       kube-vip-ha-866665
	7fcd79ed43f7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   97bf2aa8738ce       kube-scheduler-ha-866665
	adc8145247000       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   682c38a8f4263       etcd-ha-866665
	b639b306bcc41       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   cb596bb7a70bf       kube-apiserver-ha-866665
	dddbd40f934ba       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   209132c5db247       kube-controller-manager-ha-866665
	
	
	==> coredns [bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780] <==
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53874 - 31945 "HINFO IN 7631167108013983909.4597778027584677041. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009476454s
	[INFO] 10.244.0.4:38164 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009591847s
	[INFO] 10.244.1.2:58652 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000766589s
	[INFO] 10.244.1.2:51069 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001862794s
	[INFO] 10.244.0.4:39512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00055199s
	[INFO] 10.244.0.4:46188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133209s
	[INFO] 10.244.0.4:45008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008468s
	[INFO] 10.244.0.4:37076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097079s
	[INFO] 10.244.1.2:45388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815413s
	[INFO] 10.244.1.2:40983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165928s
	[INFO] 10.244.1.2:41822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199064s
	[INFO] 10.244.1.2:51003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093469s
	[INFO] 10.244.2.2:52723 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155039s
	[INFO] 10.244.2.2:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105876s
	[INFO] 10.244.2.2:40110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118647s
	[INFO] 10.244.1.2:48735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190723s
	[INFO] 10.244.1.2:59420 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115761s
	[INFO] 10.244.1.2:44465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090898s
	[INFO] 10.244.2.2:55054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145748s
	[INFO] 10.244.2.2:48352 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081059s
	[INFO] 10.244.0.4:53797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115756s
	[INFO] 10.244.0.4:52841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114315s
	[INFO] 10.244.1.2:34071 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158733s
	[INFO] 10.244.2.2:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239839s
	
	
	==> coredns [c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90] <==
	[INFO] 10.244.0.4:57992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002676087s
	[INFO] 10.244.1.2:60882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021158s
	[INFO] 10.244.1.2:57314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002029124s
	[INFO] 10.244.1.2:55031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271586s
	[INFO] 10.244.1.2:33215 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203658s
	[INFO] 10.244.2.2:48404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148272s
	[INFO] 10.244.2.2:45614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171944s
	[INFO] 10.244.2.2:42730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	[INFO] 10.244.2.2:38361 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605049s
	[INFO] 10.244.2.2:54334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:51787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138576s
	[INFO] 10.244.0.4:35351 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081934s
	[INFO] 10.244.0.4:56185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140731s
	[INFO] 10.244.0.4:49966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062146s
	[INFO] 10.244.1.2:35089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123543s
	[INFO] 10.244.2.2:59029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184488s
	[INFO] 10.244.2.2:57369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103045s
	[INFO] 10.244.0.4:37219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243853s
	[INFO] 10.244.0.4:39054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129011s
	[INFO] 10.244.1.2:38863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321539s
	[INFO] 10.244.1.2:42772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125764s
	[INFO] 10.244.1.2:50426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114767s
	[INFO] 10.244.2.2:48400 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140476s
	[INFO] 10.244.2.2:47852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177728s
	[INFO] 10.244.2.2:44657 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185799s
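	The alternating NXDOMAIN/NOERROR pairs in both coredns logs are the expected result of the pod resolver's search-path expansion (ndots:5): a short name such as kubernetes.default is first tried against each search domain, producing kubernetes.default.default.svc.cluster.local (NXDOMAIN) before kubernetes.default.svc.cluster.local resolves (NOERROR). The Go sketch below only illustrates that expansion order; the search list and ndots value are the conventional kubelet defaults for a pod in the "default" namespace, assumed here rather than read from this cluster's resolv.conf.
	
	// Illustrative sketch of resolv.conf search-path expansion as seen in the
	// coredns queries above. Search list and ndots=5 are assumed defaults,
	// not taken from the test cluster.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates returns the FQDNs a stub resolver would try, in order, for a
	// given name under the classic ndots rule.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			// Too few dots: try each search suffix first, then the name as-is.
			for _, s := range search {
				out = append(out, name+"."+s)
			}
			out = append(out, name)
		} else {
			out = append(out, name)
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return out
	}
	
	func main() {
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("kubernetes.default", search, 5) {
			fmt.Println(q) // first candidate is the NXDOMAIN seen in the log
		}
	}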
	
	
	==> describe nodes <==
	Name:               ha-866665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:11:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    ha-866665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3eab3c085e414bb06a8b946d23d263
	  System UUID:                3e3eab3c-085e-414b-b06a-8b946d23d263
	  Boot ID:                    67c0c773-5540-4e63-8171-6ccf807dc545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-82knb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 coredns-5dd5756b68-mgthb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-5dd5756b68-r57px             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-866665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-9nvvx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-866665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-866665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-sbxgg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-866665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-866665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s  kubelet          Node ha-866665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s  kubelet          Node ha-866665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s  kubelet          Node ha-866665 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal  NodeReady                6m16s  kubelet          Node ha-866665 status is now: NodeReady
	  Normal  RegisteredNode           4m50s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal  RegisteredNode           3m35s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	
	
	Name:               ha-866665-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:12:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:15:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-866665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 58bd1411345f4ad89979a7572186fe49
	  System UUID:                58bd1411-345f-4ad8-9979-a7572186fe49
	  Boot ID:                    8f53b4f7-489b-4d2a-a47e-7995a970d46a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sdxnc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-866665-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-26vqf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-866665-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-866665-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-lqzk8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-866665-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-866665-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m58s  kube-proxy       
	  Normal  RegisteredNode  5m18s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode  4m50s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode  3m35s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  NodeNotReady    105s   node-controller  Node ha-866665-m02 status is now: NodeNotReady
	
	
	Name:               ha-866665-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_13_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:13:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-866665-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 051bb833ce1b410da5218cd79b3897d3
	  System UUID:                051bb833-ce1b-410d-a521-8cd79b3897d3
	  Boot ID:                    2de5ba98-3540-4b1b-869e-455fabb0f5a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-xc5x4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-866665-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-qr9qm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-apiserver-ha-866665-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-ha-866665-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-proxy-6wxfg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-ha-866665-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-vip-ha-866665-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m50s  kube-proxy       
	  Normal  RegisteredNode  3m53s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal  RegisteredNode  3m50s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal  RegisteredNode  3m35s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	
	
	Name:               ha-866665-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_14_48_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:14:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-866665-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba1c60db84af4e62b4dd3481111e694e
	  System UUID:                ba1c60db-84af-4e62-b4dd-3481111e694e
	  Boot ID:                    0376ead4-1240-436a-b9a9-8b12bb4d45e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j2vlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-bq6md    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x5 over 2m53s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x5 over 2m53s)  kubelet          Node ha-866665-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x5 over 2m53s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-866665-m04 status is now: NodeReady
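	Across the four node descriptions above, only ha-866665-m02 is degraded: its conditions flipped to Unknown at 06:15:54 after the kubelet stopped reporting, and the node controller applied the unreachable NoExecute/NoSchedule taints, which is the expected picture once the secondary control plane is powered off. A hedged pair of one-liners to confirm just that state instead of re-reading the full describe output:
	
	  # Ready condition of the stopped node (expected to print "Unknown" here).
	  kubectl --context ha-866665 get node ha-866665-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	  # Taints applied by the node lifecycle controller.
	  kubectl --context ha-866665 get node ha-866665-m02 \
	    -o jsonpath='{range .spec.taints[*]}{.key}:{.effect}{"\n"}{end}'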
	
	
	==> dmesg <==
	[Mar15 06:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053149] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.657842] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.630134] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.215570] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054962] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.193593] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.117038] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.245141] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.806127] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059748] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.159068] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.996795] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:11] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435] <==
	{"level":"warn","ts":"2024-03-15T06:17:39.166761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.183287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.219561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.234979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.257109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.266305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.282125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.295489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.33494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.338827Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.351141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.358597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.365161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.368782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.369045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.372385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.384723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.393637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.401391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.407131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.411794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.418616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.425167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.436755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:17:39.466829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
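	The etcd warnings above are all one message repeated: local member 83fde65c75733ea3 keeps dropping heartbeats addressed to peer af74041eca695613, whose connection is reported as inactive. That is consistent with the m02 member simply being down rather than with a genuinely overloaded network. A sketch of how the remaining quorum could be checked from the surviving member, assuming minikube's kubeadm-style certificate layout under /var/lib/minikube/certs (the paths are an assumption; they are not printed in this report):
	
	  # Ask the surviving etcd member for the health of every endpoint in the cluster.
	  kubectl --context ha-866665 -n kube-system exec etcd-ha-866665 -- etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health --cluster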
	
	
	==> kernel <==
	 06:17:39 up 7 min,  0 users,  load average: 0.22, 0.36, 0.20
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3] <==
	I0315 06:17:06.429013       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:17:16.451359       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:17:16.451665       1 main.go:227] handling current node
	I0315 06:17:16.451790       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:17:16.451813       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:17:16.452133       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:17:16.452216       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:17:16.452469       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:17:16.452560       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:17:26.463277       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:17:26.463297       1 main.go:227] handling current node
	I0315 06:17:26.463306       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:17:26.463310       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:17:26.463439       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:17:26.463466       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:17:26.463528       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:17:26.463564       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:17:36.494904       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:17:36.495026       1 main.go:227] handling current node
	I0315 06:17:36.495059       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:17:36.495091       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:17:36.495330       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:17:36.495370       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:17:36.495439       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:17:36.495457       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551] <==
	I0315 06:12:36.017007       1 trace.go:236] Trace[2044672862]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6a85415a-27a0-4bcf-95ba-7853fcf32943,client:192.168.39.27,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:30.425) (total time: 5591ms):
	Trace[2044672862]: ["Create etcd3" audit-id:6a85415a-27a0-4bcf-95ba-7853fcf32943,key:/events/kube-system/kube-vip-ha-866665-m02.17bcdb5b4183cbe5,type:*core.Event,resource:events 5591ms (06:12:30.425)
	Trace[2044672862]:  ---"Txn call succeeded" 5591ms (06:12:36.016)]
	Trace[2044672862]: [5.591828817s] [5.591828817s] END
	I0315 06:12:36.020054       1 trace.go:236] Trace[1564094206]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f9dbe11d-a229-475c-86d6-bddbaa84ba10,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-vzzt5p77xnbzxty72rxwpkluua,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (15-Mar-2024 06:12:30.916) (total time: 5103ms):
	Trace[1564094206]: ["GuaranteedUpdate etcd3" audit-id:f9dbe11d-a229-475c-86d6-bddbaa84ba10,key:/leases/kube-system/apiserver-vzzt5p77xnbzxty72rxwpkluua,type:*coordination.Lease,resource:leases.coordination.k8s.io 5103ms (06:12:30.916)
	Trace[1564094206]:  ---"Txn call completed" 5102ms (06:12:36.019)]
	Trace[1564094206]: [5.10316225s] [5.10316225s] END
	I0315 06:12:36.021281       1 trace.go:236] Trace[517327827]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4ba017d7-8d46-473a-9b10-9c0c7c6551c9,client:192.168.39.78,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-866665-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (15-Mar-2024 06:12:31.993) (total time: 4027ms):
	Trace[517327827]: ["GuaranteedUpdate etcd3" audit-id:4ba017d7-8d46-473a-9b10-9c0c7c6551c9,key:/minions/ha-866665-m02,type:*core.Node,resource:nodes 4027ms (06:12:31.993)
	Trace[517327827]:  ---"Txn call completed" 4022ms (06:12:36.019)]
	Trace[517327827]: ---"About to apply patch" 4023ms (06:12:36.019)
	Trace[517327827]: [4.027362454s] [4.027362454s] END
	I0315 06:12:36.081062       1 trace.go:236] Trace[957907163]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d1fa324d-4ad7-43e8-a882-57dbc52cba26,client:192.168.39.27,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-866665-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (15-Mar-2024 06:12:31.791) (total time: 4288ms):
	Trace[957907163]: ["GuaranteedUpdate etcd3" audit-id:d1fa324d-4ad7-43e8-a882-57dbc52cba26,key:/minions/ha-866665-m02,type:*core.Node,resource:nodes 4282ms (06:12:31.798)
	Trace[957907163]:  ---"Txn call completed" 4217ms (06:12:36.017)
	Trace[957907163]:  ---"Txn call completed" 59ms (06:12:36.079)]
	Trace[957907163]: ---"About to apply patch" 4217ms (06:12:36.017)
	Trace[957907163]: ---"Object stored in database" 61ms (06:12:36.079)
	Trace[957907163]: [4.288548607s] [4.288548607s] END
	I0315 06:12:36.081649       1 trace.go:236] Trace[79545702]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ec3d2183-acde-4229-b714-66b115ad792f,client:192.168.39.27,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:31.060) (total time: 5021ms):
	Trace[79545702]: [5.021096301s] [5.021096301s] END
	I0315 06:12:36.084320       1 trace.go:236] Trace[186789167]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4835e2d2-3441-4e8f-8963-a052fe415079,client:192.168.39.27,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:30.058) (total time: 6025ms):
	Trace[186789167]: [6.025711189s] [6.025711189s] END
	W0315 06:15:25.721641       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.78 192.168.39.89]
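	The apiserver traces above are from 06:12:36, while the m02 control plane was joining: individual etcd Txn calls took 4-6 seconds, so node patches and pod creations stalled for the same amount of time. If that symptom reappeared, one hedged way to quantify it is the apiserver's own etcd latency histogram (standard upstream metric names, not something these logs print):
	
	  # Dump the etcd request latency histogram exposed by kube-apiserver.
	  kubectl --context ha-866665 get --raw /metrics | grep '^etcd_request_duration_seconds'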
	
	
	==> kube-controller-manager [dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323] <==
	I0315 06:14:10.059049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.382995ms"
	I0315 06:14:10.059756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="148.537µs"
	I0315 06:14:10.167830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="27.291048ms"
	I0315 06:14:10.168892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="167.027µs"
	I0315 06:14:13.455744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.11689ms"
	I0315 06:14:13.455900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.265µs"
	I0315 06:14:13.485160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.262346ms"
	I0315 06:14:13.485357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.298µs"
	I0315 06:14:13.577372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.001517ms"
	I0315 06:14:13.577700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.583µs"
	E0315 06:14:46.312438       1 certificate_controller.go:146] Sync csr-dzhd4 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dzhd4": the object has been modified; please apply your changes to the latest version and try again
	I0315 06:14:47.806996       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-866665-m04\" does not exist"
	I0315 06:14:47.835105       1 range_allocator.go:380] "Set node PodCIDR" node="ha-866665-m04" podCIDRs=["10.244.3.0/24"]
	I0315 06:14:47.859740       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j2vlf"
	I0315 06:14:47.859799       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bq6md"
	I0315 06:14:47.959585       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-626tb"
	I0315 06:14:47.974350       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-cx2hs"
	I0315 06:14:48.070328       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qhf9w"
	I0315 06:14:48.093918       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hhpn4"
	I0315 06:14:51.961094       1 event.go:307] "Event occurred" object="ha-866665-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller"
	I0315 06:14:51.992940       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665-m04"
	I0315 06:14:57.303712       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-866665-m04"
	I0315 06:15:54.074306       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-866665-m04"
	I0315 06:15:54.221006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.676896ms"
	I0315 06:15:54.221352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="196.406µs"
	
	
	==> kube-proxy [c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0] <==
	I0315 06:11:18.764572       1 server_others.go:69] "Using iptables proxy"
	I0315 06:11:18.841281       1 node.go:141] Successfully retrieved node IP: 192.168.39.78
	I0315 06:11:18.911950       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:11:18.912019       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:11:18.915281       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:11:18.915456       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:11:18.916163       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:11:18.916369       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:11:18.919735       1 config.go:188] "Starting service config controller"
	I0315 06:11:18.923685       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:11:18.921381       1 config.go:315] "Starting node config controller"
	I0315 06:11:18.923886       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:11:18.923178       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:11:18.926044       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:11:19.024431       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:11:19.027043       1 shared_informer.go:318] Caches are synced for node config
	I0315 06:11:19.027170       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3] <==
	W0315 06:11:03.233416       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:11:03.233558       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:11:03.291505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:11:03.291601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:11:03.379668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:11:03.379771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:11:03.429320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:11:03.429369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:11:03.470464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:11:03.470509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:11:03.490574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 06:11:03.490720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 06:11:03.581373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:11:03.581558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0315 06:11:06.508617       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 06:13:46.942994       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qr9qm\": pod kindnet-qr9qm is already assigned to node \"ha-866665-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qr9qm" node="ha-866665-m03"
	E0315 06:13:46.943139       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod bd816497-5a8b-4028-9fa5-d4f5739b651e(kube-system/kindnet-qr9qm) wasn't assumed so cannot be forgotten"
	E0315 06:13:46.943288       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qr9qm\": pod kindnet-qr9qm is already assigned to node \"ha-866665-m03\"" pod="kube-system/kindnet-qr9qm"
	I0315 06:13:46.943361       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qr9qm" node="ha-866665-m03"
	E0315 06:13:47.029446       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gwtb2\": pod kindnet-gwtb2 is already assigned to node \"ha-866665-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gwtb2" node="ha-866665-m03"
	E0315 06:13:47.029544       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gwtb2\": pod kindnet-gwtb2 is already assigned to node \"ha-866665-m03\"" pod="kube-system/kindnet-gwtb2"
	E0315 06:14:09.662146       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sdxnc\": pod busybox-5b5d89c9d6-sdxnc is already assigned to node \"ha-866665-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-sdxnc" node="ha-866665-m02"
	E0315 06:14:09.663772       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 48cca13d-39b1-40db-9f6c-1bff9b794de9(default/busybox-5b5d89c9d6-sdxnc) wasn't assumed so cannot be forgotten"
	E0315 06:14:09.664034       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sdxnc\": pod busybox-5b5d89c9d6-sdxnc is already assigned to node \"ha-866665-m02\"" pod="default/busybox-5b5d89c9d6-sdxnc"
	I0315 06:14:09.664892       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-sdxnc" node="ha-866665-m02"
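	The scheduler errors above ("already assigned to node", "wasn't assumed so cannot be forgotten") look like retries against pods that had already been bound, which can happen around control-plane membership changes in an HA cluster; the pods did land on a node, so these read as noise rather than scheduling failures. Only one scheduler replica should be active at a time, and a hedged way to see which one currently leads is its leader-election Lease:
	
	  # Holder of the kube-scheduler leader-election lease (default object name in kube-system).
	  kubectl --context ha-866665 -n kube-system get lease kube-scheduler \
	    -o jsonpath='{.spec.holderIdentity}{"\n"}'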
	
	
	==> kubelet <==
	Mar 15 06:13:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:13:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:13:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:14:05 ha-866665 kubelet[1369]: E0315 06:14:05.568533    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:14:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:14:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:14:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:14:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:14:09 ha-866665 kubelet[1369]: I0315 06:14:09.639426    1369 topology_manager.go:215] "Topology Admit Handler" podUID="c12d72ab-189f-4a4a-a7df-54e10184a9ac" podNamespace="default" podName="busybox-5b5d89c9d6-82knb"
	Mar 15 06:14:09 ha-866665 kubelet[1369]: I0315 06:14:09.809772    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkn2\" (UniqueName: \"kubernetes.io/projected/c12d72ab-189f-4a4a-a7df-54e10184a9ac-kube-api-access-dbkn2\") pod \"busybox-5b5d89c9d6-82knb\" (UID: \"c12d72ab-189f-4a4a-a7df-54e10184a9ac\") " pod="default/busybox-5b5d89c9d6-82knb"
	Mar 15 06:15:05 ha-866665 kubelet[1369]: E0315 06:15:05.569763    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:15:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:16:05 ha-866665 kubelet[1369]: E0315 06:16:05.567725    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:16:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:17:05 ha-866665 kubelet[1369]: E0315 06:17:05.567373    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:17:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:261: (dbg) Run:  kubectl --context ha-866665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.94s)
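The post-mortem log above shows two recurring signatures rather than one obvious fatal error: the kubelet's periodic ip6tables canary failing because the ip6tables `nat' table does not exist on the node, and the scheduler retrying a bind for busybox-5b5d89c9d6-sdxnc that had already been assigned to ha-866665-m02. Neither message is necessarily the proximate cause of the StopSecondaryNode failure, but the canary error is easy to confirm by hand. A minimal check against the primary node, reusing the docker user and SSH key path that the harness itself reports later in this section; the modprobe step is an assumption about why the table is absent (on this ISO the ip6table_nat module may simply not be built, in which case the warning is likely benign for an IPv4-only cluster):

        ssh -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa docker@192.168.39.78 \
          "lsmod | grep ip6table_nat; sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"

If the modprobe succeeds, the canary error should stop recurring on the kubelet's next iptables sync; if it fails, the log lines above will keep repeating once per minute as seen here.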

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (61.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (3.188893549s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:17:44.088631   29498 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:17:44.088916   29498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:44.088927   29498 out.go:304] Setting ErrFile to fd 2...
	I0315 06:17:44.088935   29498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:44.089546   29498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:17:44.089884   29498 out.go:298] Setting JSON to false
	I0315 06:17:44.089963   29498 mustload.go:65] Loading cluster: ha-866665
	I0315 06:17:44.090025   29498 notify.go:220] Checking for updates...
	I0315 06:17:44.090671   29498 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:17:44.090693   29498 status.go:255] checking status of ha-866665 ...
	I0315 06:17:44.091127   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.091193   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.105599   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36169
	I0315 06:17:44.105993   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.106601   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.106629   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.106971   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.107181   29498 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:17:44.108629   29498 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:17:44.108644   29498 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:44.108982   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.109039   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.123234   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39669
	I0315 06:17:44.123630   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.124094   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.124114   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.124366   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.124544   29498 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:17:44.127179   29498 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:44.127638   29498 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:44.127670   29498 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:44.127829   29498 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:44.128206   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.128242   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.142815   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0315 06:17:44.143187   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.143627   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.143649   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.143992   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.144176   29498 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:17:44.144349   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:44.144380   29498 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:17:44.147099   29498 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:44.147498   29498 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:44.147521   29498 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:44.147644   29498 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:17:44.147798   29498 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:17:44.147932   29498 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:17:44.148063   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:17:44.225091   29498 ssh_runner.go:195] Run: systemctl --version
	I0315 06:17:44.231914   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:44.247221   29498 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:44.247248   29498 api_server.go:166] Checking apiserver status ...
	I0315 06:17:44.247280   29498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:44.265254   29498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:17:44.275801   29498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:44.275871   29498 ssh_runner.go:195] Run: ls
	I0315 06:17:44.280256   29498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:44.285031   29498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:44.285050   29498 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:17:44.285060   29498 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:44.285081   29498 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:17:44.285359   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.285392   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.299940   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0315 06:17:44.300367   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.300849   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.300871   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.301223   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.301418   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:17:44.302943   29498 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:17:44.302958   29498 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:44.303274   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.303310   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.318839   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0315 06:17:44.319396   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.319931   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.319949   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.320266   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.320458   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:17:44.323339   29498 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:44.323791   29498 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:44.323813   29498 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:44.324020   29498 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:44.324286   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:44.324331   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:44.338927   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35993
	I0315 06:17:44.339275   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:44.339752   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:44.339773   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:44.340109   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:44.340288   29498 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:17:44.340501   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:44.340525   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:17:44.343177   29498 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:44.343568   29498 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:44.343593   29498 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:44.343733   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:17:44.343907   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:17:44.344049   29498 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:17:44.344182   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:17:46.852836   29498 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:46.852936   29498 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:17:46.852960   29498 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:46.852971   29498 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:17:46.853003   29498 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:46.853016   29498 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:17:46.853464   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:46.853529   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:46.868742   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0315 06:17:46.869151   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:46.869688   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:46.869715   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:46.870064   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:46.870246   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:17:46.871870   29498 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:17:46.871897   29498 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:46.872174   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:46.872216   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:46.887271   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0315 06:17:46.887743   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:46.888230   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:46.888258   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:46.888575   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:46.888752   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:17:46.891482   29498 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:46.891908   29498 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:46.891926   29498 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:46.892065   29498 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:46.892366   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:46.892398   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:46.907020   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I0315 06:17:46.907402   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:46.907852   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:46.907873   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:46.908175   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:46.908355   29498 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:17:46.908567   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:46.908586   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:17:46.911408   29498 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:46.911900   29498 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:46.911939   29498 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:46.912148   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:17:46.912331   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:17:46.912505   29498 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:17:46.912642   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:17:46.996340   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:47.013442   29498 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:47.013468   29498 api_server.go:166] Checking apiserver status ...
	I0315 06:17:47.013510   29498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:47.029493   29498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:17:47.040737   29498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:47.040825   29498 ssh_runner.go:195] Run: ls
	I0315 06:17:47.046123   29498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:47.053224   29498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:47.053249   29498 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:17:47.053258   29498 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:47.053271   29498 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:17:47.053601   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:47.053634   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:47.069407   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0315 06:17:47.069875   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:47.070314   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:47.070338   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:47.070633   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:47.070802   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:17:47.072398   29498 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:17:47.072412   29498 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:47.072752   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:47.072791   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:47.088620   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0315 06:17:47.089107   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:47.089738   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:47.089760   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:47.090098   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:47.090289   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:17:47.093047   29498 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:47.093516   29498 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:47.093549   29498 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:47.093693   29498 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:47.094094   29498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:47.094139   29498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:47.108801   29498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0315 06:17:47.109282   29498 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:47.109810   29498 main.go:141] libmachine: Using API Version  1
	I0315 06:17:47.109833   29498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:47.110139   29498 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:47.110312   29498 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:17:47.110485   29498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:47.110508   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:17:47.113357   29498 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:47.113915   29498 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:47.113940   29498 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:47.114111   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:17:47.114287   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:17:47.114431   29498 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:17:47.114581   29498 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:17:47.197962   29498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:47.216661   29498 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
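The stderr above records the probe sequence that `minikube status` runs per node: launch the kvm2 driver plugin, resolve the node IP from the libvirt DHCP lease, open SSH as the docker user, run `df -h /var` to check storage, check `systemctl is-active kubelet`, and for control-plane nodes locate kube-apiserver with pgrep and hit https://192.168.39.254:8443/healthz through the VIP. ha-866665-m02 never gets past the first SSH dial (no route to host), so its kubelet and apiserver are reported as Nonexistent. A sketch of reproducing the probe by hand from the Jenkins host, assuming nc is installed and the key material from the run is still in place; the IP, key path, and docker user are taken verbatim from the log above:

        nc -vz 192.168.39.27 22
        ssh -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa docker@192.168.39.27 "df -h /var"
        curl -k https://192.168.39.254:8443/healthz

"No route to host" on the first two commands would be consistent with m02 still coming back up after the `node start m02` issued seconds earlier, rather than an SSH-level problem on an otherwise reachable guest.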
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (5.346627031s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:17:48.084225   29593 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:17:48.084321   29593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:48.084329   29593 out.go:304] Setting ErrFile to fd 2...
	I0315 06:17:48.084336   29593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:48.084540   29593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:17:48.084689   29593 out.go:298] Setting JSON to false
	I0315 06:17:48.084713   29593 mustload.go:65] Loading cluster: ha-866665
	I0315 06:17:48.084836   29593 notify.go:220] Checking for updates...
	I0315 06:17:48.085095   29593 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:17:48.085109   29593 status.go:255] checking status of ha-866665 ...
	I0315 06:17:48.085504   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.085561   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.101712   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I0315 06:17:48.102165   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.102689   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.102711   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.103133   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.103337   29593 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:17:48.105394   29593 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:17:48.105413   29593 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:48.105700   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.105752   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.121414   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
	I0315 06:17:48.121888   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.122290   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.122310   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.122686   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.122862   29593 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:17:48.125820   29593 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:48.126341   29593 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:48.126371   29593 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:48.126563   29593 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:48.126842   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.126885   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.142263   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0315 06:17:48.142897   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.143505   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.143534   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.143951   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.144164   29593 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:17:48.144442   29593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:48.144500   29593 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:17:48.147571   29593 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:48.148003   29593 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:48.148039   29593 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:48.148168   29593 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:17:48.148343   29593 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:17:48.148503   29593 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:17:48.148638   29593 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:17:48.225137   29593 ssh_runner.go:195] Run: systemctl --version
	I0315 06:17:48.231498   29593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:48.245998   29593 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:48.246047   29593 api_server.go:166] Checking apiserver status ...
	I0315 06:17:48.246100   29593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:48.262102   29593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:17:48.271938   29593 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:48.272000   29593 ssh_runner.go:195] Run: ls
	I0315 06:17:48.276627   29593 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:48.283450   29593 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:48.283474   29593 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:17:48.283484   29593 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:48.283504   29593 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:17:48.283790   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.283827   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.299602   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
	I0315 06:17:48.299991   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.300489   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.300514   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.300824   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.301038   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:17:48.302798   29593 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:17:48.302836   29593 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:48.303253   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.303290   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.317844   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I0315 06:17:48.318254   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.318711   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.318738   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.319084   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.319244   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:17:48.322734   29593 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:48.323251   29593 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:48.323287   29593 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:48.323494   29593 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:48.323789   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:48.323843   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:48.340681   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0315 06:17:48.341228   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:48.341756   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:48.341775   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:48.342101   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:48.342305   29593 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:17:48.342503   29593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:48.342525   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:17:48.345525   29593 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:48.345985   29593 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:48.346014   29593 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:48.346192   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:17:48.346373   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:17:48.346567   29593 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:17:48.346753   29593 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:17:49.924732   29593 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:49.924802   29593 retry.go:31] will retry after 145.344396ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:52.996734   29593 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:52.996823   29593 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:17:52.996846   29593 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:52.996877   29593 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:17:52.996914   29593 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:52.996929   29593 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:17:52.997222   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:52.997273   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.012513   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44055
	I0315 06:17:53.012975   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.013678   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.013705   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.014066   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.014283   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:17:53.016122   29593 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:17:53.016141   29593 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:53.016526   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:53.016570   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.031680   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34051
	I0315 06:17:53.032126   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.032691   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.032717   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.033039   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.033272   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:17:53.036908   29593 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:53.037477   29593 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:53.037507   29593 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:53.037859   29593 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:53.038190   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:53.038230   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.052909   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0315 06:17:53.053370   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.053910   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.053942   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.054410   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.054594   29593 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:17:53.054903   29593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:53.054923   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:17:53.057839   29593 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:53.058212   29593 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:53.058247   29593 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:53.058416   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:17:53.058610   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:17:53.058837   29593 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:17:53.059027   29593 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:17:53.151627   29593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:53.169971   29593 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:53.169999   29593 api_server.go:166] Checking apiserver status ...
	I0315 06:17:53.170041   29593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:53.187469   29593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:17:53.198765   29593 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:53.198822   29593 ssh_runner.go:195] Run: ls
	I0315 06:17:53.203984   29593 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:53.211326   29593 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:53.211361   29593 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:17:53.211372   29593 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:53.211412   29593 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:17:53.211725   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:53.211816   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.226775   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0315 06:17:53.227320   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.227863   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.227884   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.228195   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.228388   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:17:53.230230   29593 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:17:53.230249   29593 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:53.230516   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:53.230574   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.245595   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0315 06:17:53.246030   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.246506   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.246530   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.246896   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.247108   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:17:53.250098   29593 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:53.250578   29593 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:53.250607   29593 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:53.250777   29593 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:53.251102   29593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:53.251139   29593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:53.266274   29593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0315 06:17:53.266770   29593 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:53.267293   29593 main.go:141] libmachine: Using API Version  1
	I0315 06:17:53.267315   29593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:53.267698   29593 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:53.267962   29593 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:17:53.268165   29593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:53.268183   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:17:53.271258   29593 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:53.271719   29593 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:53.271754   29593 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:53.271923   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:17:53.272142   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:17:53.272325   29593 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:17:53.272559   29593 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:17:53.356803   29593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:53.372753   29593 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
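This second attempt also shows the retry behaviour in sshutil: after the first "no route to host" the dial is retried once (roughly 145ms backoff here) before the node is marked Host:Error, which is why each status invocation costs several seconds. When the guest is unreachable over SSH, the VM can still be inspected at the libvirt level; the domain names match the node names in the DBG lines above. A sketch, assuming access to the system libvirt instance that the kvm2 driver uses on this host:

        sudo virsh list --all
        sudo virsh domstate ha-866665-m02
        sudo virsh domifaddr ha-866665-m02

A running domain with no lease reported for 192.168.39.27 would point at the guest still booting or its network interface not coming up, which matches the Host:Error / Nonexistent status repeated in these attempts.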
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (4.477700439s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:17:55.439290   29688 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:17:55.439457   29688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:55.439469   29688 out.go:304] Setting ErrFile to fd 2...
	I0315 06:17:55.439475   29688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:17:55.439762   29688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:17:55.440000   29688 out.go:298] Setting JSON to false
	I0315 06:17:55.440033   29688 mustload.go:65] Loading cluster: ha-866665
	I0315 06:17:55.440146   29688 notify.go:220] Checking for updates...
	I0315 06:17:55.440575   29688 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:17:55.440595   29688 status.go:255] checking status of ha-866665 ...
	I0315 06:17:55.440995   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.441069   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.456793   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0315 06:17:55.457229   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.457761   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.457792   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.458203   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.458433   29688 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:17:55.459967   29688 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:17:55.459993   29688 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:55.460390   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.460438   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.475512   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0315 06:17:55.475941   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.476452   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.476505   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.476815   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.477025   29688 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:17:55.479741   29688 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:55.480197   29688 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:55.480231   29688 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:55.480349   29688 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:17:55.480699   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.480782   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.496892   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0315 06:17:55.497335   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.497785   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.497802   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.498128   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.498337   29688 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:17:55.498527   29688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:55.498555   29688 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:17:55.501644   29688 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:55.502228   29688 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:17:55.502257   29688 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:17:55.502380   29688 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:17:55.502555   29688 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:17:55.502744   29688 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:17:55.502960   29688 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:17:55.584482   29688 ssh_runner.go:195] Run: systemctl --version
	I0315 06:17:55.591546   29688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:55.609746   29688 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:55.609776   29688 api_server.go:166] Checking apiserver status ...
	I0315 06:17:55.609816   29688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:55.626748   29688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:17:55.639697   29688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:55.639755   29688 ssh_runner.go:195] Run: ls
	I0315 06:17:55.644965   29688 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:55.649591   29688 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:55.649614   29688 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:17:55.649636   29688 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:55.649650   29688 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:17:55.649944   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.649990   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.664875   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44435
	I0315 06:17:55.665332   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.665889   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.665917   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.666256   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.666419   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:17:55.668053   29688 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:17:55.668069   29688 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:55.668397   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.668432   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.683915   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41415
	I0315 06:17:55.684340   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.684910   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.684934   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.685221   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.685435   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:17:55.688189   29688 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:55.688747   29688 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:55.688767   29688 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:55.688920   29688 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:17:55.689256   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:55.689290   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:55.705485   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0315 06:17:55.705999   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:55.706508   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:55.706530   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:55.706865   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:55.707055   29688 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:17:55.707255   29688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:55.707274   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:17:55.710058   29688 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:55.710420   29688 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:17:55.710455   29688 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:17:55.710620   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:17:55.710829   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:17:55.710972   29688 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:17:55.711102   29688 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:17:56.068810   29688 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:56.068853   29688 retry.go:31] will retry after 357.692323ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:59.492725   29688 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:17:59.492809   29688 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:17:59.492832   29688 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:59.492846   29688 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:17:59.492885   29688 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:17:59.492899   29688 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:17:59.493222   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.493300   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.509687   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40793
	I0315 06:17:59.510231   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.510769   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.510791   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.511093   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.511316   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:17:59.512922   29688 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:17:59.512940   29688 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:59.513241   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.513282   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.528772   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34257
	I0315 06:17:59.529237   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.529775   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.529798   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.530190   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.530429   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:17:59.533335   29688 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:59.533941   29688 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:59.533970   29688 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:59.534128   29688 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:17:59.534429   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.534480   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.550265   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0315 06:17:59.550671   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.551105   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.551127   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.551429   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.551608   29688 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:17:59.551795   29688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:59.551817   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:17:59.554691   29688 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:59.555130   29688 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:17:59.555162   29688 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:17:59.555380   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:17:59.555540   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:17:59.555692   29688 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:17:59.555819   29688 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:17:59.641310   29688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:59.659557   29688 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:17:59.659583   29688 api_server.go:166] Checking apiserver status ...
	I0315 06:17:59.659613   29688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:17:59.675047   29688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:17:59.686173   29688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:17:59.686231   29688 ssh_runner.go:195] Run: ls
	I0315 06:17:59.690773   29688 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:17:59.697372   29688 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:17:59.697400   29688 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:17:59.697412   29688 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:17:59.697432   29688 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:17:59.697795   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.697830   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.713667   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0315 06:17:59.714181   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.714767   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.714800   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.715099   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.715257   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:17:59.716786   29688 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:17:59.716805   29688 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:59.717084   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.717118   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.731595   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0315 06:17:59.732114   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.732701   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.732727   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.733038   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.733245   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:17:59.736107   29688 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:59.736639   29688 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:59.736734   29688 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:59.736826   29688 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:17:59.737119   29688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:17:59.737159   29688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:17:59.751874   29688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I0315 06:17:59.752383   29688 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:17:59.752882   29688 main.go:141] libmachine: Using API Version  1
	I0315 06:17:59.752904   29688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:17:59.753249   29688 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:17:59.753430   29688 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:17:59.753634   29688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:17:59.753651   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:17:59.756729   29688 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:59.757116   29688 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:17:59.757144   29688 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:17:59.757362   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:17:59.757505   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:17:59.757674   29688 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:17:59.757815   29688 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:17:59.845142   29688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:17:59.859955   29688 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
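The trace above spells out the per-node probe that the status command runs once it can SSH into a node: sh -c "df -h /var | awk 'NR==2{print $5}'", i.e. the fifth column of the second line of df -h /var. The following standalone Go program is a minimal sketch of what that probe boils down to (hypothetical code, not minikube's implementation; it runs the pipeline locally instead of over SSH, and the helper name varUsagePercent is illustrative):

// disk_usage_sketch.go -- hypothetical sketch of the /var usage probe seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// varUsagePercent runs the same shell pipeline the status check issues on each node
// (second line of `df -h /var`, fifth column, e.g. "23%") and returns it as an int.
func varUsagePercent() (int, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return 0, fmt.Errorf("df probe failed: %w", err)
	}
	pct := strings.TrimSuffix(strings.TrimSpace(string(out)), "%")
	return strconv.Atoi(pct)
}

func main() {
	if pct, err := varUsagePercent(); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Printf("/var is %d%% full\n", pct)
	}
}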
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (4.680106605s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
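The stderr traces in this test also show how apiserver liveness is probed process-side: sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup exits with status 1 on this image (there is no freezer controller entry, as is typical on cgroup v2 hosts), so the check logs the "unable to find freezer cgroup" warning and appears to fall back to an ls plus the healthz endpoint. Below is a minimal Go sketch of that cgroup lookup, assuming the cgroup v1 line format hierarchy:controller:path (the helper freezerCgroup is hypothetical, not minikube's code):

// cgroup_sketch.go -- hypothetical sketch of the freezer-cgroup lookup behind the
// "sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup" lines in the trace above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// freezerCgroup returns the freezer cgroup path of the given PID, or "" if the
// process has no freezer controller entry (typical on cgroup v2, which is why the
// grep in the log exits with status 1).
func freezerCgroup(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each cgroup v1 line looks like "7:freezer:/some/path".
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], nil
		}
	}
	return "", sc.Err()
}

func main() {
	path, err := freezerCgroup(os.Getpid())
	fmt.Printf("freezer cgroup: %q err: %v\n", path, err)
}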
** stderr ** 
	I0315 06:18:01.605303   29797 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:01.605455   29797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:01.605469   29797 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:01.605475   29797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:01.605691   29797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:01.605862   29797 out.go:298] Setting JSON to false
	I0315 06:18:01.605889   29797 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:01.605946   29797 notify.go:220] Checking for updates...
	I0315 06:18:01.606235   29797 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:01.606247   29797 status.go:255] checking status of ha-866665 ...
	I0315 06:18:01.606686   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.606752   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.625643   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0315 06:18:01.626058   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.626625   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.626649   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.626955   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.627117   29797 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:18:01.628859   29797 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:18:01.628889   29797 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:01.629136   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.629166   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.643515   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0315 06:18:01.643908   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.644543   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.644574   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.644933   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.645128   29797 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:18:01.648152   29797 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:01.648612   29797 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:01.648640   29797 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:01.648834   29797 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:01.649103   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.649154   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.663645   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0315 06:18:01.664015   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.664421   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.664442   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.664806   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.664969   29797 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:18:01.665161   29797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:01.665186   29797 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:18:01.668144   29797 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:01.668520   29797 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:01.668548   29797 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:01.668690   29797 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:18:01.668849   29797 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:18:01.668992   29797 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:18:01.669138   29797 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:18:01.752699   29797 ssh_runner.go:195] Run: systemctl --version
	I0315 06:18:01.759515   29797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:01.782720   29797 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:01.782756   29797 api_server.go:166] Checking apiserver status ...
	I0315 06:18:01.782800   29797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:01.799153   29797 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:18:01.809837   29797 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:01.809887   29797 ssh_runner.go:195] Run: ls
	I0315 06:18:01.814859   29797 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:01.819603   29797 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:01.819624   29797 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:18:01.819633   29797 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:01.819649   29797 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:18:01.819944   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.819975   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.834820   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0315 06:18:01.835239   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.835685   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.835708   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.836020   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.836216   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:18:01.837878   29797 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:18:01.837896   29797 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:01.838298   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.838330   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.852697   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0315 06:18:01.853132   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.853628   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.853652   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.853970   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.854151   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:18:01.856800   29797 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:01.857236   29797 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:01.857254   29797 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:01.857415   29797 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:01.857765   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:01.857830   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:01.873649   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0315 06:18:01.874028   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:01.874493   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:01.874515   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:01.874778   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:01.874953   29797 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:18:01.875125   29797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:01.875145   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:18:01.877823   29797 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:01.878281   29797 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:01.878310   29797 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:01.878409   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:18:01.878601   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:18:01.878761   29797 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:18:01.878937   29797 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:18:02.564746   29797 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:02.564800   29797 retry.go:31] will retry after 227.617521ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:18:05.860764   29797 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:18:05.860853   29797 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:18:05.860876   29797 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:05.860909   29797 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:18:05.860929   29797 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:05.860939   29797 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:18:05.861372   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:05.861422   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:05.876053   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0315 06:18:05.876574   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:05.877075   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:05.877100   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:05.877440   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:05.877665   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:05.879234   29797 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:18:05.879249   29797 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:05.879562   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:05.879609   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:05.894374   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0315 06:18:05.894799   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:05.895212   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:05.895233   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:05.895576   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:05.895749   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:18:05.898685   29797 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:05.899107   29797 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:05.899140   29797 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:05.899326   29797 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:05.900116   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:05.900156   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:05.917193   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0315 06:18:05.917584   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:05.918075   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:05.918107   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:05.918485   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:05.918717   29797 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:05.918942   29797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:05.918967   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:05.922116   29797 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:05.922592   29797 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:05.922624   29797 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:05.922762   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:05.922909   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:05.923046   29797 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:05.923212   29797 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:06.009786   29797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:06.026751   29797 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:06.026778   29797 api_server.go:166] Checking apiserver status ...
	I0315 06:18:06.026808   29797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:06.041954   29797 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:18:06.052643   29797 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:06.052710   29797 ssh_runner.go:195] Run: ls
	I0315 06:18:06.057388   29797 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:06.062639   29797 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:06.062663   29797 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:18:06.062671   29797 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:06.062686   29797 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:18:06.062945   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:06.062980   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:06.078778   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0315 06:18:06.079231   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:06.079748   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:06.079769   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:06.080162   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:06.080360   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:06.082095   29797 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:18:06.082112   29797 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:06.082427   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:06.082471   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:06.098209   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0315 06:18:06.098611   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:06.099031   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:06.099053   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:06.099359   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:06.099640   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:18:06.102460   29797 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:06.102832   29797 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:06.102865   29797 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:06.103030   29797 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:06.103421   29797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:06.103463   29797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:06.118225   29797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32955
	I0315 06:18:06.118663   29797 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:06.119106   29797 main.go:141] libmachine: Using API Version  1
	I0315 06:18:06.119121   29797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:06.119436   29797 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:06.119673   29797 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:06.119931   29797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:06.119957   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:06.122826   29797 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:06.123340   29797 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:06.123364   29797 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:06.123523   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:06.123704   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:06.123884   29797 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:06.124052   29797 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:06.208315   29797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:06.226633   29797 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
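For each reachable control-plane node, the trace above ends with "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..." followed by "returned 200: ok" against the cluster VIP. The Go program below is a small sketch of such a probe (hypothetical; the real client would authenticate with the cluster's CA and client certificates rather than skipping TLS verification, which is done here purely for illustration):

// healthz_sketch.go -- hypothetical sketch of the apiserver healthz probe seen in the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy reports whether the healthz endpoint answers 200 with body "ok",
// which is what the log above treats as "apiserver status = Running".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: do not verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println(ok, err)
}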
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (4.237408059s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
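The stderr that follows repeats the pattern already visible above for ha-866665-m02: the SSH dial to 192.168.39.27:22 fails with "no route to host", is retried after a short delay, and the node is finally reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. A minimal Go sketch of that dial-and-retry loop (the function dialWithRetry is hypothetical, with a simple linear backoff rather than minikube's retry.go logic):

// dial_retry_sketch.go -- hypothetical sketch of the "dial failure (will retry)" pattern
// in the stderr traces of this test.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP dial to addr until it succeeds or the
// overall deadline expires, sleeping a little longer after each failure.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	var lastErr error
	stop := time.Now().Add(deadline)
	for attempt := 1; time.Now().Before(stop); attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		delay := time.Duration(attempt) * 250 * time.Millisecond // simple linear backoff
		fmt.Printf("dial failure (will retry after %s): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("gave up dialing %s: %w", addr, lastErr)
}

func main() {
	// 192.168.39.27:22 is the unreachable ha-866665-m02 node from the log above.
	if conn, err := dialWithRetry("192.168.39.27:22", 10*time.Second); err != nil {
		fmt.Println(err)
	} else {
		conn.Close()
	}
}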
** stderr ** 
	I0315 06:18:08.389555   29891 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:08.389693   29891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:08.389704   29891 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:08.389711   29891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:08.389880   29891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:08.390063   29891 out.go:298] Setting JSON to false
	I0315 06:18:08.390087   29891 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:08.390142   29891 notify.go:220] Checking for updates...
	I0315 06:18:08.390421   29891 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:08.390434   29891 status.go:255] checking status of ha-866665 ...
	I0315 06:18:08.390791   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.390849   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.410244   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0315 06:18:08.410625   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.411280   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.411322   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.411755   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.412012   29891 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:18:08.413705   29891 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:18:08.413721   29891 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:08.414032   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.414080   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.430624   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0315 06:18:08.431026   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.431493   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.431517   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.431853   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.432044   29891 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:18:08.435027   29891 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:08.435421   29891 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:08.435453   29891 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:08.435628   29891 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:08.435915   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.435948   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.451609   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0315 06:18:08.452032   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.452451   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.452501   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.452823   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.453023   29891 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:18:08.453257   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:08.453293   29891 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:18:08.456294   29891 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:08.456821   29891 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:08.456862   29891 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:08.456985   29891 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:18:08.457142   29891 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:18:08.457296   29891 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:18:08.457428   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:18:08.536371   29891 ssh_runner.go:195] Run: systemctl --version
	I0315 06:18:08.545352   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:08.561467   29891 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:08.561496   29891 api_server.go:166] Checking apiserver status ...
	I0315 06:18:08.561545   29891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:08.580059   29891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:18:08.592487   29891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:08.592574   29891 ssh_runner.go:195] Run: ls
	I0315 06:18:08.597962   29891 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:08.607759   29891 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:08.607788   29891 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:18:08.607798   29891 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:08.607817   29891 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:18:08.608315   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.608370   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.625633   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42373
	I0315 06:18:08.626204   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.626751   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.626767   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.627058   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.627242   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:18:08.629147   29891 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:18:08.629162   29891 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:08.629435   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.629468   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.644653   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0315 06:18:08.645159   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.645619   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.645643   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.645919   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.646056   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:18:08.648920   29891 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:08.649355   29891 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:08.649376   29891 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:08.649527   29891 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:08.649868   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:08.649904   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:08.666442   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0315 06:18:08.666871   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:08.667344   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:08.667366   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:08.667678   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:08.667912   29891 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:18:08.668083   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:08.668098   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:18:08.670992   29891 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:08.671417   29891 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:08.671438   29891 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:08.671588   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:18:08.671789   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:18:08.671899   29891 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:18:08.672033   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:18:08.932705   29891 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:08.932746   29891 retry.go:31] will retry after 208.002773ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:18:12.196704   29891 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:18:12.196806   29891 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:18:12.196827   29891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:12.196835   29891 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:18:12.196858   29891 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:12.196868   29891 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:18:12.197171   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.197210   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.215051   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0315 06:18:12.215522   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.215970   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.215992   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.216321   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.216530   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:12.218062   29891 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:18:12.218078   29891 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:12.218380   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.218412   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.232563   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0315 06:18:12.233037   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.233574   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.233590   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.233891   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.234076   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:18:12.236768   29891 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:12.237184   29891 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:12.237207   29891 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:12.237348   29891 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:12.237666   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.237698   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.252036   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0315 06:18:12.252499   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.253007   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.253035   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.253329   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.253567   29891 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:12.253851   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:12.253875   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:12.256683   29891 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:12.257145   29891 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:12.257179   29891 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:12.257332   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:12.257519   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:12.257694   29891 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:12.257913   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:12.350991   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:12.366686   29891 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:12.366718   29891 api_server.go:166] Checking apiserver status ...
	I0315 06:18:12.366777   29891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:12.381987   29891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:18:12.393452   29891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:12.393540   29891 ssh_runner.go:195] Run: ls
	I0315 06:18:12.398608   29891 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:12.406140   29891 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:12.406168   29891 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:18:12.406181   29891 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:12.406218   29891 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:18:12.406599   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.406647   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.422782   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0315 06:18:12.423235   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.423657   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.423671   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.424011   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.424210   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:12.426106   29891 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:18:12.426120   29891 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:12.426395   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.426477   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.440821   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I0315 06:18:12.441197   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.441619   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.441648   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.441991   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.442186   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:18:12.445202   29891 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:12.445664   29891 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:12.445687   29891 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:12.445847   29891 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:12.446146   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:12.446187   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:12.460486   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0315 06:18:12.460841   29891 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:12.461282   29891 main.go:141] libmachine: Using API Version  1
	I0315 06:18:12.461304   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:12.461625   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:12.461801   29891 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:12.462010   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:12.462027   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:12.464355   29891 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:12.464719   29891 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:12.464738   29891 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:12.464915   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:12.465081   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:12.465208   29891 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:12.465353   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:12.549309   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:12.566085   29891 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
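Note: the stderr above traces how `minikube status` probes each node. The kvm2 plugin resolves the node's IP from its DHCP lease, opens an SSH session, and runs `df -h /var | awk 'NR==2{print $5}'` to confirm /var is reachable; a dial failure such as `connect: no route to host` is retried briefly (retry.go) and then surfaced as `Host:Error` with kubelet and apiserver reported as `Nonexistent`. The Go sketch below illustrates that probe pattern only; the `NodeStatus` type, the `probeNode` helper, and the key path are assumptions for illustration, not minikube's actual code.

// probe_node.go: a minimal sketch (not minikube's real status code) of the
// per-node probe seen in the log: SSH to the node, run `df -h /var`, and
// treat a dial failure such as "no route to host" as Host:Error.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// NodeStatus mirrors the fields printed in the status structs above.
type NodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// probeNode is a hypothetical helper: it dials <ip>:22 with the node's
// private key and runs the same df/awk pipeline that appears in the log.
func probeNode(name, ip, keyPath string) NodeStatus {
	down := NodeStatus{name, "Error", "Nonexistent", "Nonexistent"}

	key, err := os.ReadFile(keyPath)
	if err != nil {
		return down
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return down
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", ip+":22", cfg)
	if err != nil {
		// "dial tcp ...:22: connect: no route to host" lands here.
		return down
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return down
	}
	defer sess.Close()

	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
	if err != nil {
		return down
	}
	fmt.Printf("%s /var usage: %s", name, out)
	return NodeStatus{name, "Running", "Running", "Running"}
}

func main() {
	// Key path is a placeholder, not the path from the log.
	fmt.Printf("%+v\n", probeNode("ha-866665-m02", "192.168.39.27", "/path/to/id_rsa"))
}

In the failing run, ha-866665-m02 never answers on 192.168.39.27:22, so the probe falls through the dial-error branch and the overall status command exits non-zero.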
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 3 (3.745802212s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:18:17.108498   29997 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:17.108775   29997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:17.108785   29997 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:17.108790   29997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:17.108980   29997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:17.109142   29997 out.go:298] Setting JSON to false
	I0315 06:18:17.109169   29997 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:17.109289   29997 notify.go:220] Checking for updates...
	I0315 06:18:17.109697   29997 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:17.109717   29997 status.go:255] checking status of ha-866665 ...
	I0315 06:18:17.110242   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.110307   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.126013   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0315 06:18:17.126534   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.127135   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.127159   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.127513   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.127713   29997 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:18:17.129304   29997 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:18:17.129323   29997 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:17.129659   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.129697   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.144853   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I0315 06:18:17.145269   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.145794   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.145829   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.146201   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.146422   29997 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:18:17.149359   29997 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:17.149840   29997 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:17.149867   29997 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:17.150043   29997 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:17.150365   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.150431   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.166208   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0315 06:18:17.166614   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.167066   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.167094   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.167453   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.167653   29997 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:18:17.167884   29997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:17.167912   29997 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:18:17.170646   29997 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:17.171163   29997 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:17.171208   29997 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:17.171359   29997 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:18:17.171556   29997 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:18:17.171705   29997 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:18:17.171816   29997 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:18:17.249610   29997 ssh_runner.go:195] Run: systemctl --version
	I0315 06:18:17.257840   29997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:17.274098   29997 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:17.274131   29997 api_server.go:166] Checking apiserver status ...
	I0315 06:18:17.274174   29997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:17.290443   29997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:18:17.299933   29997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:17.299989   29997 ssh_runner.go:195] Run: ls
	I0315 06:18:17.304689   29997 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:17.311225   29997 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:17.311248   29997 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:18:17.311257   29997 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:17.311272   29997 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:18:17.311597   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.311637   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.326510   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0315 06:18:17.326928   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.327369   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.327386   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.327695   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.327895   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:18:17.329658   29997 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:18:17.329690   29997 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:17.329974   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.330008   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.344493   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0315 06:18:17.344976   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.345426   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.345444   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.345732   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.346052   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:18:17.348592   29997 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:17.349032   29997 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:17.349067   29997 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:17.349253   29997 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:18:17.349539   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:17.349578   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:17.364854   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0315 06:18:17.365241   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:17.365702   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:17.365722   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:17.366050   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:17.366212   29997 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:18:17.366417   29997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:17.366436   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:18:17.368911   29997 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:17.369341   29997 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:18:17.369382   29997 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:18:17.369556   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:18:17.369714   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:18:17.369896   29997 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:18:17.370057   29997 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:18:20.424693   29997 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:18:20.424804   29997 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:18:20.424826   29997 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:20.424840   29997 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:18:20.424865   29997 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:18:20.424879   29997 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:18:20.425190   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.425239   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.441732   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0315 06:18:20.442177   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.442648   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.442673   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.443037   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.443250   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:20.445145   29997 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:18:20.445163   29997 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:20.445509   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.445555   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.461380   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0315 06:18:20.462055   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.462562   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.462585   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.462922   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.463129   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:18:20.466352   29997 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:20.466898   29997 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:20.466922   29997 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:20.467091   29997 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:20.467392   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.467434   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.484812   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0315 06:18:20.485200   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.485643   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.485656   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.485922   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.486072   29997 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:20.486281   29997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:20.486310   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:20.489788   29997 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:20.490291   29997 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:20.490329   29997 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:20.490485   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:20.490690   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:20.490854   29997 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:20.490970   29997 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:20.577052   29997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:20.594494   29997 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:20.594524   29997 api_server.go:166] Checking apiserver status ...
	I0315 06:18:20.594565   29997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:20.611541   29997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:18:20.622452   29997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:20.622532   29997 ssh_runner.go:195] Run: ls
	I0315 06:18:20.627391   29997 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:20.634541   29997 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:20.634570   29997 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:18:20.634580   29997 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:20.634595   29997 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:18:20.634980   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.635024   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.649983   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0315 06:18:20.650462   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.651001   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.651029   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.651348   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.651549   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:20.653725   29997 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:18:20.653742   29997 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:20.654116   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.654192   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.669377   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0315 06:18:20.669838   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.670429   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.670455   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.670808   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.671029   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:18:20.674395   29997 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:20.674759   29997 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:20.674800   29997 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:20.674981   29997 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:20.675487   29997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:20.675526   29997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:20.690635   29997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0315 06:18:20.691118   29997 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:20.691598   29997 main.go:141] libmachine: Using API Version  1
	I0315 06:18:20.691622   29997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:20.691938   29997 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:20.692127   29997 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:20.692306   29997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:20.692338   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:20.695242   29997 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:20.695655   29997 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:20.695679   29997 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:20.695871   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:20.696013   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:20.696259   29997 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:20.696381   29997 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:20.781358   29997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:20.799027   29997 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
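Note: for control-plane nodes that are reachable, the runs above decide `apiserver: Running` by locating the kube-apiserver process over SSH and then issuing an HTTPS GET against the shared endpoint `https://192.168.39.254:8443/healthz`; a 200 response with body `ok` is logged as healthy (the missing freezer cgroup is only a warning). The sketch below shows that kind of health check; the `checkHealthz` name and the skipped TLS verification are assumptions for illustration, and a real check would trust the cluster CA from the kubeconfig instead.

// healthz_check.go: a small sketch of the apiserver health probe logged
// above: GET https://<vip>:8443/healthz and treat HTTP 200 + "ok" as Running.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz is a hypothetical helper; the endpoint matches the log.
func checkHealthz(endpoint string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip verification rather than load the
		// cluster's self-signed CA from the kubeconfig.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return "Stopped", err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		return "Running", nil // matches "returned 200: ok" in the log
	}
	return "Error", fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
}

func main() {
	status, err := checkHealthz("https://192.168.39.254:8443")
	fmt.Println("apiserver:", status, err)
}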
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 7 (650.778732ms)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:18:26.063569   30125 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:26.063669   30125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:26.063677   30125 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:26.063681   30125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:26.063885   30125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:26.064063   30125 out.go:298] Setting JSON to false
	I0315 06:18:26.064100   30125 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:26.064216   30125 notify.go:220] Checking for updates...
	I0315 06:18:26.064566   30125 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:26.064582   30125 status.go:255] checking status of ha-866665 ...
	I0315 06:18:26.064977   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.065029   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.080969   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42973
	I0315 06:18:26.081446   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.082095   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.082123   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.082517   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.082745   30125 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:18:26.084776   30125 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:18:26.084796   30125 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:26.085072   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.085111   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.101932   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0315 06:18:26.102626   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.103139   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.103164   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.103716   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.103935   30125 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:18:26.107054   30125 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:26.107513   30125 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:26.107570   30125 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:26.107773   30125 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:26.108196   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.108266   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.124304   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32955
	I0315 06:18:26.124708   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.125124   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.125145   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.125432   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.125624   30125 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:18:26.125892   30125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:26.125919   30125 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:18:26.128943   30125 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:26.129407   30125 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:26.129435   30125 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:26.129589   30125 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:18:26.129806   30125 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:18:26.129969   30125 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:18:26.130082   30125 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:18:26.210650   30125 ssh_runner.go:195] Run: systemctl --version
	I0315 06:18:26.220998   30125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:26.239751   30125 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:26.239778   30125 api_server.go:166] Checking apiserver status ...
	I0315 06:18:26.239811   30125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:26.257634   30125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:18:26.269580   30125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:26.269634   30125 ssh_runner.go:195] Run: ls
	I0315 06:18:26.274710   30125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:26.279732   30125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:26.279759   30125 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:18:26.279771   30125 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:26.279797   30125 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:18:26.280156   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.280195   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.295345   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0315 06:18:26.295702   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.296146   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.296174   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.296489   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.296683   30125 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:18:26.298298   30125 status.go:330] ha-866665-m02 host status = "Stopped" (err=<nil>)
	I0315 06:18:26.298314   30125 status.go:343] host is not running, skipping remaining checks
	I0315 06:18:26.298322   30125 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:26.298338   30125 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:18:26.298595   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.298632   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.313315   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0315 06:18:26.313692   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.314115   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.314135   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.314434   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.314605   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:26.316219   30125 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:18:26.316236   30125 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:26.316555   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.316587   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.331043   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0315 06:18:26.331444   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.331837   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.331855   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.332124   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.332268   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:18:26.335041   30125 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:26.335528   30125 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:26.335551   30125 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:26.335772   30125 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:26.336128   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.336203   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.352707   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I0315 06:18:26.353128   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.353624   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.353657   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.354000   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.354233   30125 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:26.354421   30125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:26.354442   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:26.357541   30125 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:26.358027   30125 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:26.358060   30125 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:26.358202   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:26.358395   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:26.358570   30125 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:26.358732   30125 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:26.444680   30125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:26.459976   30125 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:26.460012   30125 api_server.go:166] Checking apiserver status ...
	I0315 06:18:26.460079   30125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:26.475605   30125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:18:26.485698   30125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:26.485761   30125 ssh_runner.go:195] Run: ls
	I0315 06:18:26.493339   30125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:26.498384   30125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:26.498412   30125 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:18:26.498424   30125 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:26.498445   30125 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:18:26.498765   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.498812   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.513397   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0315 06:18:26.513855   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.514416   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.514445   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.514771   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.514983   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:26.516335   30125 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:18:26.516352   30125 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:26.516654   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.516686   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.531018   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0315 06:18:26.531428   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.531902   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.531922   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.532185   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.532352   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:18:26.535141   30125 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:26.535710   30125 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:26.535739   30125 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:26.535937   30125 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:26.536306   30125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:26.536364   30125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:26.551483   30125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 06:18:26.551896   30125 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:26.552407   30125 main.go:141] libmachine: Using API Version  1
	I0315 06:18:26.552452   30125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:26.552769   30125 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:26.552977   30125 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:26.553187   30125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:26.553206   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:26.556140   30125 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:26.556683   30125 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:26.556713   30125 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:26.556870   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:26.557061   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:26.557237   30125 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:26.557380   30125 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:26.640350   30125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:26.656906   30125 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
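Note: the exit status changes from 3 to 7 between these runs because of what the driver reports for ha-866665-m02. While libvirt still considers the domain running, the failed SSH dial is reported as `host: Error` and the command exits with status 3; once `.GetState` returns `Stopped`, the remaining checks are skipped ("host is not running, skipping remaining checks") and the node is reported as cleanly `Stopped`, giving exit status 7. The sketch below mirrors only that branching; the `machineState` type and the exit-code constants are illustrative stand-ins, not minikube's real status flags.

// node_state.go: an illustrative sketch (not minikube's real status.go) of
// the branching visible above: a host the hypervisor still reports as
// running but which refuses SSH becomes Host:Error, while a host whose
// driver state is Stopped skips the SSH checks and is reported as Stopped.
package main

import "fmt"

type machineState int

const (
	stateRunning machineState = iota
	stateStopped
)

// Assumed exit-code convention for this sketch only: 3 when any host is in
// an error state, 7 when any host is cleanly stopped, 0 otherwise.
const (
	exitOK          = 0
	exitHostError   = 3
	exitNodeStopped = 7
)

func nodeStatus(state machineState, sshReachable bool) string {
	if state != stateRunning {
		// "host is not running, skipping remaining checks"
		return "Stopped"
	}
	if !sshReachable {
		// e.g. "dial tcp 192.168.39.27:22: connect: no route to host"
		return "Error"
	}
	return "Running"
}

func exitCode(hosts []string) int {
	code := exitOK
	for _, h := range hosts {
		switch h {
		case "Error":
			return exitHostError
		case "Stopped":
			code = exitNodeStopped
		}
	}
	return code
}

func main() {
	// Earlier run: m02 still "running" per the driver but unreachable over SSH.
	withError := []string{"Running", nodeStatus(stateRunning, false), "Running", "Running"}
	// Later runs: the driver now reports m02 as stopped.
	withStopped := []string{"Running", nodeStatus(stateStopped, false), "Running", "Running"}
	fmt.Println(withError, "->", exitCode(withError))     // contains Error   -> 3
	fmt.Println(withStopped, "->", exitCode(withStopped)) // contains Stopped -> 7
}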
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 7 (669.050112ms)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-866665-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:18:42.701367   30683 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:42.701464   30683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:42.701468   30683 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:42.701472   30683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:42.701682   30683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:42.701867   30683 out.go:298] Setting JSON to false
	I0315 06:18:42.701894   30683 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:42.702058   30683 notify.go:220] Checking for updates...
	I0315 06:18:42.702334   30683 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:42.702354   30683 status.go:255] checking status of ha-866665 ...
	I0315 06:18:42.702803   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.702880   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.720144   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0315 06:18:42.720638   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.721364   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.721399   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.721772   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.722014   30683 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:18:42.724042   30683 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:18:42.724062   30683 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:42.724393   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.724434   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.738994   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0315 06:18:42.739407   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.739904   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.739924   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.740285   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.740538   30683 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:18:42.743349   30683 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:42.743795   30683 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:42.743822   30683 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:42.743966   30683 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:18:42.744359   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.744402   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.759118   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I0315 06:18:42.759528   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.760016   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.760036   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.760370   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.760552   30683 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:18:42.760755   30683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:42.760785   30683 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:18:42.763746   30683 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:42.764246   30683 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:18:42.764272   30683 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:18:42.764441   30683 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:18:42.764626   30683 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:18:42.764769   30683 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:18:42.764905   30683 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:18:42.848101   30683 ssh_runner.go:195] Run: systemctl --version
	I0315 06:18:42.855357   30683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:42.871990   30683 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:42.872016   30683 api_server.go:166] Checking apiserver status ...
	I0315 06:18:42.872053   30683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:42.890562   30683 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0315 06:18:42.904984   30683 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:42.905045   30683 ssh_runner.go:195] Run: ls
	I0315 06:18:42.911441   30683 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:42.919696   30683 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:42.919723   30683 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:18:42.919732   30683 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:42.919749   30683 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:18:42.920049   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.920108   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.935839   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0315 06:18:42.936270   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.936815   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.936857   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.937292   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.937484   30683 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:18:42.939225   30683 status.go:330] ha-866665-m02 host status = "Stopped" (err=<nil>)
	I0315 06:18:42.939238   30683 status.go:343] host is not running, skipping remaining checks
	I0315 06:18:42.939246   30683 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:42.939266   30683 status.go:255] checking status of ha-866665-m03 ...
	I0315 06:18:42.939562   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.939631   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.954044   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0315 06:18:42.954442   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.954857   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.954881   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.955209   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.955425   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:42.957015   30683 status.go:330] ha-866665-m03 host status = "Running" (err=<nil>)
	I0315 06:18:42.957030   30683 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:42.957330   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.957366   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.973602   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0315 06:18:42.974106   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.974670   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.974688   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.975076   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.975273   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:18:42.978262   30683 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:42.978703   30683 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:42.978730   30683 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:42.978954   30683 host.go:66] Checking if "ha-866665-m03" exists ...
	I0315 06:18:42.979303   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:42.979346   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:42.994142   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0315 06:18:42.994596   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:42.995070   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:42.995087   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:42.995405   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:42.995618   30683 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:42.995807   30683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:42.995833   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:42.998856   30683 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:42.999362   30683 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:42.999390   30683 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:42.999531   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:42.999683   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:42.999850   30683 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:42.999980   30683 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:43.091291   30683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:43.109985   30683 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:18:43.110020   30683 api_server.go:166] Checking apiserver status ...
	I0315 06:18:43.110064   30683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:18:43.126353   30683 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0315 06:18:43.138196   30683 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:18:43.138263   30683 ssh_runner.go:195] Run: ls
	I0315 06:18:43.146456   30683 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:18:43.151988   30683 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:18:43.152020   30683 status.go:422] ha-866665-m03 apiserver status = Running (err=<nil>)
	I0315 06:18:43.152031   30683 status.go:257] ha-866665-m03 status: &{Name:ha-866665-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:18:43.152048   30683 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:18:43.152458   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:43.152518   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:43.168722   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0315 06:18:43.169279   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:43.169791   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:43.169809   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:43.170158   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:43.170389   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:43.172228   30683 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:18:43.172245   30683 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:43.172623   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:43.172671   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:43.187214   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0315 06:18:43.187708   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:43.188255   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:43.188278   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:43.188648   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:43.188865   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:18:43.192093   30683 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:43.192538   30683 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:43.192568   30683 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:43.192776   30683 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:18:43.193120   30683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:43.193161   30683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:43.208093   30683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0315 06:18:43.208524   30683 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:43.209094   30683 main.go:141] libmachine: Using API Version  1
	I0315 06:18:43.209120   30683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:43.209452   30683 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:43.209712   30683 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:43.209928   30683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:18:43.209948   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:43.212999   30683 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:43.213410   30683 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:43.213444   30683 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:43.213553   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:43.213741   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:43.213897   30683 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:43.214059   30683 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:43.297695   30683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:18:43.315649   30683 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr" : exit status 7
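The non-zero exit comes from minikube status itself, not a crash in the tooling: the stderr above shows ha-866665-m02 with host, kubelet and apiserver all Stopped after the attempted restart, so status reports a degraded cluster (exit status 7) while the surviving control planes still answer on the load-balanced endpoint (the probe of https://192.168.39.254:8443/healthz returned 200 "ok"). A minimal sketch of that health probe follows; it skips TLS verification purely to stay self-contained (the cluster CA is self-signed), which a real checker should not do.

// Minimal sketch of the apiserver health probe logged above: GET /healthz on
// the load-balanced endpoint. InsecureSkipVerify is a shortcut for this sketch
// only; a proper check should trust the cluster's CA certificate instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const healthz = "https://192.168.39.254:8443/healthz" // endpoint from the log
	resp, err := client.Get(healthz)
	if err != nil {
		log.Fatal(err) // VIP unreachable or no apiserver behind it
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
}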
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
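The --format={{.Host}} flag used by the post-mortem step is a Go text/template rendered against the same kind of per-node record that the verbose run above prints (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...). The sketch below mirrors those printed fields in a local struct purely for illustration (it is not minikube's own type) and renders the same template against the stopped ha-866665-m02 entry from the dump above.

// Illustrative mirror of the per-node status record seen in the stderr above,
// plus the {{.Host}} template used by the post-mortem step. Not minikube's
// actual types; field names and values are copied from the log.
package main

import (
	"log"
	"os"
	"text/template"
)

// NodeStatus mirrors the fields visible in the logged status record.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Values for ha-866665-m02, copied from the stderr dump.
	m02 := NodeStatus{
		Name:       "ha-866665-m02",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
		Worker:     false,
	}

	// Same template string as the --format flag above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, m02); err != nil {
		log.Fatal(err)
	}
	// Output: Stopped
}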
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.451483519s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m03_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
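The last two Audit entries are the operations this test exercises: ha-866665 node stop m02 and node start m02, both run at -v=7 and neither showing a recorded End Time. A hedged sketch of driving the same sequence from Go by shelling out to the same binary is below; the profile, arguments and binary path are taken from the table and the status invocations above, and this is illustrative, not the test harness itself.

// Sketch of replaying the stop/start/status sequence recorded above by
// shelling out to the minikube binary. Arguments mirror the Audit table and
// the status invocations in the logs; this is illustrative, not ha_test.go.
package main

import (
	"fmt"
	"os/exec"
)

func runMinikube(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ out/minikube-linux-amd64 %v\n%s\n", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"-p", "ha-866665", "node", "stop", "m02", "-v=7", "--alsologtostderr"},
		{"-p", "ha-866665", "node", "start", "m02", "-v=7", "--alsologtostderr"},
		{"-p", "ha-866665", "status", "-v=7", "--alsologtostderr"},
	}
	for _, args := range steps {
		if err := runMinikube(args...); err != nil {
			// A non-zero exit from the final status call (exit status 7 in the
			// run above) is how the test concludes m02 did not come back healthy.
			fmt.Println("step failed:", err)
		}
	}
}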
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:10:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:10:22.050431   25161 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:10:22.050872   25161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:10:22.050889   25161 out.go:304] Setting ErrFile to fd 2...
	I0315 06:10:22.050896   25161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:10:22.051363   25161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:10:22.052191   25161 out.go:298] Setting JSON to false
	I0315 06:10:22.053167   25161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3118,"bootTime":1710479904,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:10:22.053231   25161 start.go:139] virtualization: kvm guest
	I0315 06:10:22.055390   25161 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:10:22.057035   25161 notify.go:220] Checking for updates...
	I0315 06:10:22.057040   25161 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:10:22.058646   25161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:10:22.060128   25161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:10:22.061381   25161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.062639   25161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:10:22.063930   25161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:10:22.065416   25161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:10:22.098997   25161 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 06:10:22.100271   25161 start.go:297] selected driver: kvm2
	I0315 06:10:22.100298   25161 start.go:901] validating driver "kvm2" against <nil>
	I0315 06:10:22.100318   25161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:10:22.101110   25161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:10:22.101216   25161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:10:22.115761   25161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:10:22.115811   25161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 06:10:22.116046   25161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:10:22.116119   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:10:22.116135   25161 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0315 06:10:22.116145   25161 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 06:10:22.116207   25161 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:10:22.116318   25161 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:10:22.118196   25161 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:10:22.119336   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:10:22.119381   25161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:10:22.119390   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:10:22.119491   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:10:22.119504   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:10:22.119818   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:10:22.119846   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json: {Name:mke78c2b04ea85297521b7aca846449b5918be83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:22.119987   25161 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:10:22.120042   25161 start.go:364] duration metric: took 38.981µs to acquireMachinesLock for "ha-866665"
	I0315 06:10:22.120069   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:10:22.120175   25161 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 06:10:22.122009   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:10:22.122157   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:10:22.122201   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:10:22.136061   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0315 06:10:22.136495   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:10:22.137081   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:10:22.137108   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:10:22.137486   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:10:22.137695   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:22.137851   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:22.138011   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:10:22.138044   25161 client.go:168] LocalClient.Create starting
	I0315 06:10:22.138078   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:10:22.138111   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:10:22.138127   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:10:22.138179   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:10:22.138196   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:10:22.138209   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:10:22.138224   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:10:22.138236   25161 main.go:141] libmachine: (ha-866665) Calling .PreCreateCheck
	I0315 06:10:22.138543   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:22.138903   25161 main.go:141] libmachine: Creating machine...
	I0315 06:10:22.138916   25161 main.go:141] libmachine: (ha-866665) Calling .Create
	I0315 06:10:22.139046   25161 main.go:141] libmachine: (ha-866665) Creating KVM machine...
	I0315 06:10:22.140180   25161 main.go:141] libmachine: (ha-866665) DBG | found existing default KVM network
	I0315 06:10:22.140833   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.140700   25184 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0315 06:10:22.140858   25161 main.go:141] libmachine: (ha-866665) DBG | created network xml: 
	I0315 06:10:22.140875   25161 main.go:141] libmachine: (ha-866665) DBG | <network>
	I0315 06:10:22.140886   25161 main.go:141] libmachine: (ha-866665) DBG |   <name>mk-ha-866665</name>
	I0315 06:10:22.140895   25161 main.go:141] libmachine: (ha-866665) DBG |   <dns enable='no'/>
	I0315 06:10:22.140905   25161 main.go:141] libmachine: (ha-866665) DBG |   
	I0315 06:10:22.140916   25161 main.go:141] libmachine: (ha-866665) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 06:10:22.140925   25161 main.go:141] libmachine: (ha-866665) DBG |     <dhcp>
	I0315 06:10:22.140942   25161 main.go:141] libmachine: (ha-866665) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 06:10:22.140961   25161 main.go:141] libmachine: (ha-866665) DBG |     </dhcp>
	I0315 06:10:22.140973   25161 main.go:141] libmachine: (ha-866665) DBG |   </ip>
	I0315 06:10:22.140982   25161 main.go:141] libmachine: (ha-866665) DBG |   
	I0315 06:10:22.141038   25161 main.go:141] libmachine: (ha-866665) DBG | </network>
	I0315 06:10:22.141057   25161 main.go:141] libmachine: (ha-866665) DBG | 
	I0315 06:10:22.146019   25161 main.go:141] libmachine: (ha-866665) DBG | trying to create private KVM network mk-ha-866665 192.168.39.0/24...
	I0315 06:10:22.213307   25161 main.go:141] libmachine: (ha-866665) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 ...
	I0315 06:10:22.213341   25161 main.go:141] libmachine: (ha-866665) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:10:22.213352   25161 main.go:141] libmachine: (ha-866665) DBG | private KVM network mk-ha-866665 192.168.39.0/24 created
	I0315 06:10:22.213370   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.213251   25184 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.213430   25161 main.go:141] libmachine: (ha-866665) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:10:22.435287   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.435157   25184 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa...
	I0315 06:10:22.563588   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.563463   25184 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/ha-866665.rawdisk...
	I0315 06:10:22.563613   25161 main.go:141] libmachine: (ha-866665) DBG | Writing magic tar header
	I0315 06:10:22.563624   25161 main.go:141] libmachine: (ha-866665) DBG | Writing SSH key tar header
	I0315 06:10:22.563654   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:22.563616   25184 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 ...
	I0315 06:10:22.563778   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665 (perms=drwx------)
	I0315 06:10:22.563798   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665
	I0315 06:10:22.563809   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:10:22.563823   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:10:22.563834   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:10:22.563844   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:10:22.563858   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:10:22.563867   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:10:22.563879   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:10:22.563886   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:10:22.563897   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:10:22.563908   25161 main.go:141] libmachine: (ha-866665) DBG | Checking permissions on dir: /home
	I0315 06:10:22.563924   25161 main.go:141] libmachine: (ha-866665) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:10:22.563938   25161 main.go:141] libmachine: (ha-866665) DBG | Skipping /home - not owner
	I0315 06:10:22.563950   25161 main.go:141] libmachine: (ha-866665) Creating domain...
	I0315 06:10:22.565044   25161 main.go:141] libmachine: (ha-866665) define libvirt domain using xml: 
	I0315 06:10:22.565069   25161 main.go:141] libmachine: (ha-866665) <domain type='kvm'>
	I0315 06:10:22.565079   25161 main.go:141] libmachine: (ha-866665)   <name>ha-866665</name>
	I0315 06:10:22.565087   25161 main.go:141] libmachine: (ha-866665)   <memory unit='MiB'>2200</memory>
	I0315 06:10:22.565095   25161 main.go:141] libmachine: (ha-866665)   <vcpu>2</vcpu>
	I0315 06:10:22.565105   25161 main.go:141] libmachine: (ha-866665)   <features>
	I0315 06:10:22.565111   25161 main.go:141] libmachine: (ha-866665)     <acpi/>
	I0315 06:10:22.565117   25161 main.go:141] libmachine: (ha-866665)     <apic/>
	I0315 06:10:22.565123   25161 main.go:141] libmachine: (ha-866665)     <pae/>
	I0315 06:10:22.565138   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565146   25161 main.go:141] libmachine: (ha-866665)   </features>
	I0315 06:10:22.565151   25161 main.go:141] libmachine: (ha-866665)   <cpu mode='host-passthrough'>
	I0315 06:10:22.565159   25161 main.go:141] libmachine: (ha-866665)   
	I0315 06:10:22.565167   25161 main.go:141] libmachine: (ha-866665)   </cpu>
	I0315 06:10:22.565197   25161 main.go:141] libmachine: (ha-866665)   <os>
	I0315 06:10:22.565221   25161 main.go:141] libmachine: (ha-866665)     <type>hvm</type>
	I0315 06:10:22.565236   25161 main.go:141] libmachine: (ha-866665)     <boot dev='cdrom'/>
	I0315 06:10:22.565247   25161 main.go:141] libmachine: (ha-866665)     <boot dev='hd'/>
	I0315 06:10:22.565261   25161 main.go:141] libmachine: (ha-866665)     <bootmenu enable='no'/>
	I0315 06:10:22.565271   25161 main.go:141] libmachine: (ha-866665)   </os>
	I0315 06:10:22.565282   25161 main.go:141] libmachine: (ha-866665)   <devices>
	I0315 06:10:22.565298   25161 main.go:141] libmachine: (ha-866665)     <disk type='file' device='cdrom'>
	I0315 06:10:22.565315   25161 main.go:141] libmachine: (ha-866665)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/boot2docker.iso'/>
	I0315 06:10:22.565327   25161 main.go:141] libmachine: (ha-866665)       <target dev='hdc' bus='scsi'/>
	I0315 06:10:22.565340   25161 main.go:141] libmachine: (ha-866665)       <readonly/>
	I0315 06:10:22.565350   25161 main.go:141] libmachine: (ha-866665)     </disk>
	I0315 06:10:22.565361   25161 main.go:141] libmachine: (ha-866665)     <disk type='file' device='disk'>
	I0315 06:10:22.565374   25161 main.go:141] libmachine: (ha-866665)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:10:22.565389   25161 main.go:141] libmachine: (ha-866665)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/ha-866665.rawdisk'/>
	I0315 06:10:22.565402   25161 main.go:141] libmachine: (ha-866665)       <target dev='hda' bus='virtio'/>
	I0315 06:10:22.565410   25161 main.go:141] libmachine: (ha-866665)     </disk>
	I0315 06:10:22.565423   25161 main.go:141] libmachine: (ha-866665)     <interface type='network'>
	I0315 06:10:22.565448   25161 main.go:141] libmachine: (ha-866665)       <source network='mk-ha-866665'/>
	I0315 06:10:22.565461   25161 main.go:141] libmachine: (ha-866665)       <model type='virtio'/>
	I0315 06:10:22.565477   25161 main.go:141] libmachine: (ha-866665)     </interface>
	I0315 06:10:22.565489   25161 main.go:141] libmachine: (ha-866665)     <interface type='network'>
	I0315 06:10:22.565498   25161 main.go:141] libmachine: (ha-866665)       <source network='default'/>
	I0315 06:10:22.565511   25161 main.go:141] libmachine: (ha-866665)       <model type='virtio'/>
	I0315 06:10:22.565542   25161 main.go:141] libmachine: (ha-866665)     </interface>
	I0315 06:10:22.565563   25161 main.go:141] libmachine: (ha-866665)     <serial type='pty'>
	I0315 06:10:22.565576   25161 main.go:141] libmachine: (ha-866665)       <target port='0'/>
	I0315 06:10:22.565586   25161 main.go:141] libmachine: (ha-866665)     </serial>
	I0315 06:10:22.565596   25161 main.go:141] libmachine: (ha-866665)     <console type='pty'>
	I0315 06:10:22.565613   25161 main.go:141] libmachine: (ha-866665)       <target type='serial' port='0'/>
	I0315 06:10:22.565631   25161 main.go:141] libmachine: (ha-866665)     </console>
	I0315 06:10:22.565642   25161 main.go:141] libmachine: (ha-866665)     <rng model='virtio'>
	I0315 06:10:22.565654   25161 main.go:141] libmachine: (ha-866665)       <backend model='random'>/dev/random</backend>
	I0315 06:10:22.565664   25161 main.go:141] libmachine: (ha-866665)     </rng>
	I0315 06:10:22.565672   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565686   25161 main.go:141] libmachine: (ha-866665)     
	I0315 06:10:22.565698   25161 main.go:141] libmachine: (ha-866665)   </devices>
	I0315 06:10:22.565708   25161 main.go:141] libmachine: (ha-866665) </domain>
	I0315 06:10:22.565719   25161 main.go:141] libmachine: (ha-866665) 
	I0315 06:10:22.569993   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:ff:88:6e in network default
	I0315 06:10:22.570558   25161 main.go:141] libmachine: (ha-866665) Ensuring networks are active...
	I0315 06:10:22.570582   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:22.571265   25161 main.go:141] libmachine: (ha-866665) Ensuring network default is active
	I0315 06:10:22.571537   25161 main.go:141] libmachine: (ha-866665) Ensuring network mk-ha-866665 is active
	I0315 06:10:22.572033   25161 main.go:141] libmachine: (ha-866665) Getting domain xml...
	I0315 06:10:22.572727   25161 main.go:141] libmachine: (ha-866665) Creating domain...
	I0315 06:10:23.736605   25161 main.go:141] libmachine: (ha-866665) Waiting to get IP...
	I0315 06:10:23.737432   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:23.737824   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:23.737851   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:23.737801   25184 retry.go:31] will retry after 269.541809ms: waiting for machine to come up
	I0315 06:10:24.009421   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.009981   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.009999   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.009946   25184 retry.go:31] will retry after 355.494322ms: waiting for machine to come up
	I0315 06:10:24.367853   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.368348   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.368367   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.368297   25184 retry.go:31] will retry after 469.840562ms: waiting for machine to come up
	I0315 06:10:24.839880   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:24.840325   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:24.840353   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:24.840295   25184 retry.go:31] will retry after 509.329258ms: waiting for machine to come up
	I0315 06:10:25.351724   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:25.352604   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:25.352629   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:25.352542   25184 retry.go:31] will retry after 724.359107ms: waiting for machine to come up
	I0315 06:10:26.078398   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:26.078770   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:26.078790   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:26.078744   25184 retry.go:31] will retry after 572.771794ms: waiting for machine to come up
	I0315 06:10:26.653590   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:26.654002   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:26.654048   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:26.653957   25184 retry.go:31] will retry after 964.305506ms: waiting for machine to come up
	I0315 06:10:27.619838   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:27.620282   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:27.620316   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:27.620240   25184 retry.go:31] will retry after 1.385577587s: waiting for machine to come up
	I0315 06:10:29.007802   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:29.008244   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:29.008273   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:29.008187   25184 retry.go:31] will retry after 1.288467263s: waiting for machine to come up
	I0315 06:10:30.298780   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:30.299311   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:30.299349   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:30.299245   25184 retry.go:31] will retry after 2.203379402s: waiting for machine to come up
	I0315 06:10:32.503823   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:32.504208   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:32.504234   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:32.504159   25184 retry.go:31] will retry after 2.163155246s: waiting for machine to come up
	I0315 06:10:34.670370   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:34.670822   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:34.670846   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:34.670779   25184 retry.go:31] will retry after 2.490179724s: waiting for machine to come up
	I0315 06:10:37.162916   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:37.163316   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:37.163344   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:37.163272   25184 retry.go:31] will retry after 4.132551358s: waiting for machine to come up
	I0315 06:10:41.300521   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:41.300982   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find current IP address of domain ha-866665 in network mk-ha-866665
	I0315 06:10:41.301009   25161 main.go:141] libmachine: (ha-866665) DBG | I0315 06:10:41.300940   25184 retry.go:31] will retry after 4.068921352s: waiting for machine to come up
	I0315 06:10:45.374044   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.374464   25161 main.go:141] libmachine: (ha-866665) Found IP for machine: 192.168.39.78
	I0315 06:10:45.374481   25161 main.go:141] libmachine: (ha-866665) Reserving static IP address...
	I0315 06:10:45.374490   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has current primary IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.374815   25161 main.go:141] libmachine: (ha-866665) DBG | unable to find host DHCP lease matching {name: "ha-866665", mac: "52:54:00:96:55:9d", ip: "192.168.39.78"} in network mk-ha-866665
	I0315 06:10:45.447565   25161 main.go:141] libmachine: (ha-866665) DBG | Getting to WaitForSSH function...
	I0315 06:10:45.447590   25161 main.go:141] libmachine: (ha-866665) Reserved static IP address: 192.168.39.78
	I0315 06:10:45.447603   25161 main.go:141] libmachine: (ha-866665) Waiting for SSH to be available...
	I0315 06:10:45.450145   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.450497   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.450531   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.450621   25161 main.go:141] libmachine: (ha-866665) DBG | Using SSH client type: external
	I0315 06:10:45.450650   25161 main.go:141] libmachine: (ha-866665) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa (-rw-------)
	I0315 06:10:45.450677   25161 main.go:141] libmachine: (ha-866665) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:10:45.450686   25161 main.go:141] libmachine: (ha-866665) DBG | About to run SSH command:
	I0315 06:10:45.450698   25161 main.go:141] libmachine: (ha-866665) DBG | exit 0
	I0315 06:10:45.572600   25161 main.go:141] libmachine: (ha-866665) DBG | SSH cmd err, output: <nil>: 
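The step above checks SSH readiness by repeatedly running "exit 0" through an external ssh client until the command succeeds. A minimal Go sketch of that pattern (a hypothetical helper, not minikube's actual WaitForSSH code; the host, user and key path are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" over ssh until it succeeds or the deadline passes.
// Options mirror the ones visible in the log above; this is illustrative only.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // the guest accepted the connection and ran the command
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.78", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}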
	I0315 06:10:45.572916   25161 main.go:141] libmachine: (ha-866665) KVM machine creation complete!
	I0315 06:10:45.573224   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:45.573796   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:45.573975   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:45.574136   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:10:45.574152   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:10:45.575354   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:10:45.575369   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:10:45.575375   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:10:45.575380   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.577589   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.577839   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.577868   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.578001   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.578154   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.578339   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.578514   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.578725   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.578933   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.578951   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:10:45.675997   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:10:45.676016   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:10:45.676023   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.678790   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.679151   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.679177   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.679280   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.679507   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.679684   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.679843   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.679981   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.680200   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.680214   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:10:45.777471   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:10:45.777553   25161 main.go:141] libmachine: found compatible host: buildroot
	I0315 06:10:45.777564   25161 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:10:45.777573   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:45.777807   25161 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:10:45.777835   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:45.777991   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.780835   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.781144   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.781177   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.781327   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.781526   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.781711   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.781817   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.782015   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.782175   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.782186   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:10:45.894829   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:10:45.894868   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:45.897660   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.897993   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:45.898016   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:45.898172   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:45.898396   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.898570   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:45.898748   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:45.898911   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:45.899066   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:45.899095   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:10:46.006028   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:10:46.006060   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:10:46.006079   25161 buildroot.go:174] setting up certificates
	I0315 06:10:46.006091   25161 provision.go:84] configureAuth start
	I0315 06:10:46.006099   25161 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:10:46.006401   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.008911   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.009300   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.009328   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.009472   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.011698   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.012123   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.012153   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.012399   25161 provision.go:143] copyHostCerts
	I0315 06:10:46.012428   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:10:46.012489   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:10:46.012501   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:10:46.012567   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:10:46.012672   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:10:46.012694   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:10:46.012699   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:10:46.012727   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:10:46.012770   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:10:46.012792   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:10:46.012799   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:10:46.012819   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:10:46.012862   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:10:46.114579   25161 provision.go:177] copyRemoteCerts
	I0315 06:10:46.114641   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:10:46.114669   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.117364   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.117780   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.117809   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.118021   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.118212   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.118390   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.118526   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.199310   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:10:46.199373   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:10:46.224003   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:10:46.224106   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0315 06:10:46.248435   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:10:46.248523   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:10:46.272294   25161 provision.go:87] duration metric: took 266.191988ms to configureAuth
	I0315 06:10:46.272328   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:10:46.272538   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:10:46.272627   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.275562   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.275981   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.276023   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.276163   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.276385   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.276517   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.276701   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.276867   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:46.277048   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:46.277071   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:10:46.538977   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:10:46.539024   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:10:46.539032   25161 main.go:141] libmachine: (ha-866665) Calling .GetURL
	I0315 06:10:46.540356   25161 main.go:141] libmachine: (ha-866665) DBG | Using libvirt version 6000000
	I0315 06:10:46.542333   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.542620   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.542639   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.542807   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:10:46.542826   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:10:46.542833   25161 client.go:171] duration metric: took 24.404778843s to LocalClient.Create
	I0315 06:10:46.542857   25161 start.go:167] duration metric: took 24.404846145s to libmachine.API.Create "ha-866665"
	I0315 06:10:46.542870   25161 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:10:46.542883   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:10:46.542915   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.543138   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:10:46.543163   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.545171   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.545465   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.545497   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.545595   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.545782   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.545957   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.546062   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.623204   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:10:46.627555   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:10:46.627579   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:10:46.627705   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:10:46.627795   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:10:46.627806   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:10:46.627895   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:10:46.638848   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:10:46.666574   25161 start.go:296] duration metric: took 123.69068ms for postStartSetup
	I0315 06:10:46.666628   25161 main.go:141] libmachine: (ha-866665) Calling .GetConfigRaw
	I0315 06:10:46.667229   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.669803   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.670172   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.670194   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.670420   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:10:46.670631   25161 start.go:128] duration metric: took 24.550442544s to createHost
	I0315 06:10:46.670659   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.672755   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.673063   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.673088   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.673196   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.673370   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.673556   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.673663   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.673817   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:10:46.674009   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:10:46.674028   25161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:10:46.773443   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483046.746376540
	
	I0315 06:10:46.773467   25161 fix.go:216] guest clock: 1710483046.746376540
	I0315 06:10:46.773477   25161 fix.go:229] Guest: 2024-03-15 06:10:46.74637654 +0000 UTC Remote: 2024-03-15 06:10:46.670646135 +0000 UTC m=+24.668914568 (delta=75.730405ms)
	I0315 06:10:46.773518   25161 fix.go:200] guest clock delta is within tolerance: 75.730405ms
	I0315 06:10:46.773527   25161 start.go:83] releasing machines lock for "ha-866665", held for 24.653469865s
	I0315 06:10:46.773549   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.773840   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:46.776569   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.776912   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.776943   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.777132   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777661   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777840   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:10:46.777938   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:10:46.777981   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.778075   25161 ssh_runner.go:195] Run: cat /version.json
	I0315 06:10:46.778103   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:10:46.780425   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780612   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780828   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.780855   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780963   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:46.780985   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:46.780996   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.781148   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:10:46.781201   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.781295   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:10:46.781371   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.781424   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:10:46.781502   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.781565   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:10:46.853380   25161 ssh_runner.go:195] Run: systemctl --version
	I0315 06:10:46.890714   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:10:47.062319   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:10:47.068972   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:10:47.069031   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:10:47.087360   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:10:47.087388   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:10:47.087454   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:10:47.103753   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:10:47.118832   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:10:47.118898   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:10:47.133344   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:10:47.148065   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:10:47.257782   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:10:47.415025   25161 docker.go:233] disabling docker service ...
	I0315 06:10:47.415117   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:10:47.430257   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:10:47.443144   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:10:47.565290   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:10:47.683033   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:10:47.698205   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:10:47.717813   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:10:47.717877   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.729049   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:10:47.729112   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.739834   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.750874   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:10:47.761604   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:10:47.772612   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:10:47.782572   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:10:47.782627   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:10:47.797200   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:10:47.807675   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:10:47.926805   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:10:48.064995   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:10:48.065064   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
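The "Will wait 60s for socket path /var/run/crio/crio.sock" step boils down to polling the socket until CRI-O answers. A rough Go sketch of that polling loop (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials a unix socket until it accepts a connection or the
// timeout elapses, mirroring the wait-for-crio.sock step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}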
	I0315 06:10:48.070184   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:10:48.070231   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:10:48.074107   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:10:48.111051   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:10:48.111120   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:10:48.139812   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:10:48.171363   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:10:48.172663   25161 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:10:48.175331   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:48.175663   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:10:48.175690   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:10:48.175866   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:10:48.180029   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
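The grep-and-rewrite command above keeps /etc/hosts idempotent: any stale line for host.minikube.internal is dropped and the current mapping is appended. A small Go sketch of the same idea, operating on a local test file rather than the guest's /etc/hosts (the file path and helper name are assumptions for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops existing lines ending in "\t<name>" and appends
// "<ip>\t<name>", the same effect as the shell pipeline in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blanks and stale entries for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"))
}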
	I0315 06:10:48.193238   25161 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:10:48.193374   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:10:48.193425   25161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:10:48.225832   25161 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 06:10:48.225887   25161 ssh_runner.go:195] Run: which lz4
	I0315 06:10:48.229904   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0315 06:10:48.229974   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 06:10:48.234179   25161 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 06:10:48.234210   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 06:10:49.956064   25161 crio.go:444] duration metric: took 1.726111064s to copy over tarball
	I0315 06:10:49.956128   25161 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 06:10:52.358393   25161 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.402235402s)
	I0315 06:10:52.358430   25161 crio.go:451] duration metric: took 2.40234102s to extract the tarball
	I0315 06:10:52.358440   25161 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 06:10:52.402370   25161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:10:52.448534   25161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:10:52.448561   25161 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:10:52.448571   25161 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:10:52.448707   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:10:52.448780   25161 ssh_runner.go:195] Run: crio config
	I0315 06:10:52.493214   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:10:52.493238   25161 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 06:10:52.493249   25161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:10:52.493267   25161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:10:52.493394   25161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
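One cheap sanity check on a generated multi-document config like the one above is to split it on the document separators and confirm that every document parses and carries the expected kind. A short Go sketch (illustrative; the kubeadm.yaml path is a stand-in for wherever the config is written):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// Splits the generated config on "---" and prints each document's kind and
// apiVersion, so a missing ClusterConfiguration or a parse error stands out.
func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // hypothetical path
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", m["kind"], m["apiVersion"])
	}
}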
	
	I0315 06:10:52.493424   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:10:52.493481   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:10:52.511497   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:10:52.511618   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
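Static pod manifests like the kube-vip one above are rendered from a handful of profile values (VIP address, port, image). A stripped-down Go sketch of that rendering step, using text/template with assumed field names rather than minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// vipConfig holds the few values substituted into the trimmed-down manifest
// below; both the struct and the template are assumptions for illustration.
type vipConfig struct {
	VIP   string
	Port  string
	Image string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	cfg := vipConfig{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.7.1"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}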
	I0315 06:10:52.511684   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:10:52.521808   25161 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:10:52.521872   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:10:52.531706   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:10:52.548963   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:10:52.565745   25161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:10:52.583246   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:10:52.600918   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:10:52.605045   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:10:52.617352   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:10:52.732776   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:10:52.749351   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:10:52.749373   25161 certs.go:194] generating shared ca certs ...
	I0315 06:10:52.749386   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.749522   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:10:52.749561   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:10:52.749569   25161 certs.go:256] generating profile certs ...
	I0315 06:10:52.749625   25161 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:10:52.749639   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt with IP's: []
	I0315 06:10:52.812116   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt ...
	I0315 06:10:52.812142   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt: {Name:mke5907f5cfc66a67f0f76eff96e868fbd1233e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.812324   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key ...
	I0315 06:10:52.812337   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key: {Name:mkdc7da3f09b5ab449f3abedb8f51edf6d84c254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.812415   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926
	I0315 06:10:52.812430   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.254]
	I0315 06:10:52.886122   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 ...
	I0315 06:10:52.886158   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926: {Name:mk2e805aca2504c2638efb9dda22ab0fed9ba051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.886335   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926 ...
	I0315 06:10:52.886351   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926: {Name:mk69e895b0b36226f84d4728c7b95565f24b0bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:52.886424   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.c37ea926 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:10:52.886513   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.c37ea926 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:10:52.886564   25161 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:10:52.886582   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt with IP's: []
	I0315 06:10:53.069389   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt ...
	I0315 06:10:53.069418   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt: {Name:mk3ae531538aaa57a97c1b9779a2bc292afd5f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:53.069560   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key ...
	I0315 06:10:53.069571   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key: {Name:mk39feed49c56fa9080f460282da6bba51dd9975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:10:53.069646   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:10:53.069663   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:10:53.069672   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:10:53.069691   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:10:53.069703   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:10:53.069713   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:10:53.069725   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:10:53.069735   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:10:53.069785   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:10:53.069818   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:10:53.069832   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:10:53.069855   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:10:53.069876   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:10:53.069901   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:10:53.069940   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:10:53.069965   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.069978   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.069989   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.070515   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:10:53.099857   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:10:53.126915   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:10:53.153257   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:10:53.178995   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 06:10:53.205162   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:10:53.231878   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:10:53.258021   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:10:53.285933   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:10:53.313170   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:10:53.338880   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:10:53.364443   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:10:53.382241   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:10:53.388428   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:10:53.400996   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.405861   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.405912   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:10:53.411876   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:10:53.423763   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:10:53.435979   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.440603   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.440657   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:10:53.446520   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:10:53.458396   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:10:53.470239   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.475125   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.475179   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:10:53.483367   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
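Editor's note: the three `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA certificate under /etc/ssl/certs with the hash-named link (`<hash>.0`) that OpenSSL's lookup expects. A rough Go equivalent of the idempotent `test -L <link> || ln -fs <target> <link>` step is sketched below; it assumes the hash has already been computed and is not minikube code.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // linkCert mirrors `test -L <link> || ln -fs <target> <link>` from the log:
    // create /etc/ssl/certs/<hash>.0 pointing at the installed PEM unless
    // something by that name already exists.
    func linkCert(target, hash string) error {
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil
        }
        return os.Symlink(target, link)
    }

    func main() {
        // Hash value copied from the log line for minikubeCA.pem.
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "b5213941"); err != nil {
            fmt.Println("link failed:", err)
        }
    }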
	I0315 06:10:53.495783   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:10:53.500376   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:10:53.500435   25161 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:10:53.500549   25161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:10:53.500610   25161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:10:53.544607   25161 cri.go:89] found id: ""
	I0315 06:10:53.544672   25161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 06:10:53.558526   25161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 06:10:53.570799   25161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 06:10:53.589500   25161 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 06:10:53.589522   25161 kubeadm.go:156] found existing configuration files:
	
	I0315 06:10:53.589575   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 06:10:53.601642   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 06:10:53.601713   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 06:10:53.614300   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 06:10:53.627314   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 06:10:53.627371   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 06:10:53.639841   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 06:10:53.651949   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 06:10:53.652023   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 06:10:53.662956   25161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 06:10:53.672956   25161 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 06:10:53.673035   25161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 06:10:53.683576   25161 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 06:10:53.791497   25161 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 06:10:53.791603   25161 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 06:10:53.926570   25161 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 06:10:53.926725   25161 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 06:10:53.926884   25161 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 06:10:54.140322   25161 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 06:10:54.240759   25161 out.go:204]   - Generating certificates and keys ...
	I0315 06:10:54.240858   25161 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 06:10:54.240936   25161 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 06:10:54.315095   25161 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 06:10:54.736716   25161 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 06:10:54.813228   25161 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 06:10:55.115299   25161 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 06:10:55.224421   25161 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 06:10:55.224597   25161 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-866665 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0315 06:10:55.282784   25161 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 06:10:55.283087   25161 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-866665 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I0315 06:10:55.657171   25161 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 06:10:55.822466   25161 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 06:10:56.141839   25161 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 06:10:56.142014   25161 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 06:10:56.343288   25161 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 06:10:56.482472   25161 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 06:10:56.614382   25161 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 06:10:56.813589   25161 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 06:10:56.814099   25161 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 06:10:56.818901   25161 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 06:10:56.820947   25161 out.go:204]   - Booting up control plane ...
	I0315 06:10:56.821044   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 06:10:56.821113   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 06:10:56.821172   25161 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 06:10:56.835970   25161 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 06:10:56.836936   25161 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 06:10:56.836985   25161 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 06:10:56.968185   25161 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 06:11:04.062364   25161 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.098252 seconds
	I0315 06:11:04.062500   25161 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 06:11:04.085094   25161 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 06:11:04.618994   25161 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 06:11:04.619194   25161 kubeadm.go:309] [mark-control-plane] Marking the node ha-866665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 06:11:05.133303   25161 kubeadm.go:309] [bootstrap-token] Using token: kltubs.8avr8euk1lbixl0k
	I0315 06:11:05.134809   25161 out.go:204]   - Configuring RBAC rules ...
	I0315 06:11:05.134931   25161 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 06:11:05.140662   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 06:11:05.148671   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 06:11:05.152686   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 06:11:05.160280   25161 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 06:11:05.164264   25161 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 06:11:05.180896   25161 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 06:11:05.429159   25161 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 06:11:05.547540   25161 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 06:11:05.552787   25161 kubeadm.go:309] 
	I0315 06:11:05.552861   25161 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 06:11:05.552878   25161 kubeadm.go:309] 
	I0315 06:11:05.553011   25161 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 06:11:05.553022   25161 kubeadm.go:309] 
	I0315 06:11:05.553048   25161 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 06:11:05.553156   25161 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 06:11:05.553235   25161 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 06:11:05.553246   25161 kubeadm.go:309] 
	I0315 06:11:05.553318   25161 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 06:11:05.553328   25161 kubeadm.go:309] 
	I0315 06:11:05.553430   25161 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 06:11:05.553448   25161 kubeadm.go:309] 
	I0315 06:11:05.553539   25161 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 06:11:05.553645   25161 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 06:11:05.553766   25161 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 06:11:05.553786   25161 kubeadm.go:309] 
	I0315 06:11:05.553906   25161 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 06:11:05.554025   25161 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 06:11:05.554035   25161 kubeadm.go:309] 
	I0315 06:11:05.554150   25161 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kltubs.8avr8euk1lbixl0k \
	I0315 06:11:05.554260   25161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 06:11:05.554282   25161 kubeadm.go:309] 	--control-plane 
	I0315 06:11:05.554286   25161 kubeadm.go:309] 
	I0315 06:11:05.554353   25161 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 06:11:05.554360   25161 kubeadm.go:309] 
	I0315 06:11:05.554457   25161 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kltubs.8avr8euk1lbixl0k \
	I0315 06:11:05.554581   25161 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 06:11:05.563143   25161 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 06:11:05.563178   25161 cni.go:84] Creating CNI manager for ""
	I0315 06:11:05.563194   25161 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 06:11:05.564745   25161 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 06:11:05.565921   25161 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 06:11:05.573563   25161 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 06:11:05.573581   25161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 06:11:05.666813   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 06:11:06.516825   25161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 06:11:06.516866   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:06.516974   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665 minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=true
	I0315 06:11:06.656186   25161 ops.go:34] apiserver oom_adj: -16
	I0315 06:11:06.656606   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:07.157415   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:07.657228   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:08.157249   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:08.657447   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:09.156869   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:09.656995   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:10.156634   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:10.657577   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:11.156792   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:11.656723   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:12.157097   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:12.657699   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:13.156802   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:13.657548   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:14.157618   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:14.657657   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:15.156920   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:15.657599   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:16.157545   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:16.657334   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.157318   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.657750   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 06:11:17.778194   25161 kubeadm.go:1107] duration metric: took 11.261376371s to wait for elevateKubeSystemPrivileges
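Editor's note: the run of `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default service account exists ("elevateKubeSystemPrivileges"), which took about 11s here. The Go sketch below shows the same polling shape under stated assumptions; the kubeconfig path and interval are illustrative and this is not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
    // deadline passes, mirroring the ~500ms polling loop visible in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }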
	W0315 06:11:17.778241   25161 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 06:11:17.778251   25161 kubeadm.go:393] duration metric: took 24.277818857s to StartCluster
	I0315 06:11:17.778266   25161 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:17.778330   25161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:11:17.778982   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:17.779207   25161 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:17.779227   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:11:17.779215   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 06:11:17.779293   25161 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 06:11:17.779361   25161 addons.go:69] Setting storage-provisioner=true in profile "ha-866665"
	I0315 06:11:17.779385   25161 addons.go:69] Setting default-storageclass=true in profile "ha-866665"
	I0315 06:11:17.779410   25161 addons.go:234] Setting addon storage-provisioner=true in "ha-866665"
	I0315 06:11:17.779421   25161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-866665"
	I0315 06:11:17.779433   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:17.779443   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:17.779833   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.779872   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.780042   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.780106   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.794793   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 06:11:17.795024   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0315 06:11:17.795216   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.795393   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.795754   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.795777   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.795872   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.795911   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.796136   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.796213   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.796362   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.796717   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.796759   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.798343   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:11:17.798565   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 06:11:17.798962   25161 cert_rotation.go:137] Starting client certificate rotation controller
	I0315 06:11:17.799201   25161 addons.go:234] Setting addon default-storageclass=true in "ha-866665"
	I0315 06:11:17.799236   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:17.799586   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.799630   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.812556   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0315 06:11:17.813066   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.813620   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.813642   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.814018   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.814195   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.815112   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0315 06:11:17.815506   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.816010   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.816032   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.816143   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:17.816359   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.818651   25161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:11:17.816931   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:17.820268   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:17.820379   25161 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:11:17.820397   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 06:11:17.820416   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:17.823295   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.823681   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:17.823747   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.823855   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:17.824034   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:17.824173   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:17.824333   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:17.835187   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0315 06:11:17.835566   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:17.835986   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:17.836016   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:17.836398   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:17.836581   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:17.838121   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:17.838341   25161 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 06:11:17.838352   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 06:11:17.838364   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:17.840844   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.841296   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:17.841319   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:17.841427   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:17.841599   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:17.841756   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:17.841873   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:18.004520   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 06:11:18.033029   25161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 06:11:18.056028   25161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:11:18.830505   25161 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
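Editor's note: the long `sed` pipeline a few lines above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host (192.168.39.1 here). For illustration only, a hedged client-go sketch of the same idea follows; it assumes k8s.io/client-go is available, and the kubeconfig path, host IP, and string surgery are placeholders rather than minikube's actual code.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Insert a hosts stanza just before the forward plugin (indentation is illustrative).
        hosts := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("host record injected into CoreDNS ConfigMap")
    }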
	I0315 06:11:18.830594   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.830618   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.830888   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.830908   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.830948   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.830960   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.830970   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.831187   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.831205   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.831214   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.831316   25161 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0315 06:11:18.831324   25161 round_trippers.go:469] Request Headers:
	I0315 06:11:18.831334   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:11:18.831339   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:11:18.842577   25161 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0315 06:11:18.843182   25161 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0315 06:11:18.843197   25161 round_trippers.go:469] Request Headers:
	I0315 06:11:18.843208   25161 round_trippers.go:473]     Content-Type: application/json
	I0315 06:11:18.843214   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:11:18.843219   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:11:18.846442   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:11:18.846656   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.846674   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.846920   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.846941   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.846962   25161 main.go:141] libmachine: (ha-866665) DBG | Closing plugin on server side
	I0315 06:11:18.969602   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.969629   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.969936   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.969954   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.969964   25161 main.go:141] libmachine: Making call to close driver server
	I0315 06:11:18.969972   25161 main.go:141] libmachine: (ha-866665) Calling .Close
	I0315 06:11:18.970192   25161 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:11:18.970204   25161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:11:18.972182   25161 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0315 06:11:18.973516   25161 addons.go:505] duration metric: took 1.194224795s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0315 06:11:18.973563   25161 start.go:245] waiting for cluster config update ...
	I0315 06:11:18.973582   25161 start.go:254] writing updated cluster config ...
	I0315 06:11:18.975206   25161 out.go:177] 
	I0315 06:11:18.976662   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:18.976735   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:18.978300   25161 out.go:177] * Starting "ha-866665-m02" control-plane node in "ha-866665" cluster
	I0315 06:11:18.979766   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:11:18.979803   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:11:18.979917   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:11:18.979932   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:11:18.980000   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:18.980245   25161 start.go:360] acquireMachinesLock for ha-866665-m02: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:11:18.980293   25161 start.go:364] duration metric: took 27.2µs to acquireMachinesLock for "ha-866665-m02"
	I0315 06:11:18.980316   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:18.980411   25161 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0315 06:11:18.982711   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:11:18.982794   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:18.982826   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:18.997393   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0315 06:11:18.997850   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:18.998314   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:18.998335   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:18.998666   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:18.998819   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:18.998972   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:18.999185   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:11:18.999209   25161 client.go:168] LocalClient.Create starting
	I0315 06:11:18.999242   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:11:18.999284   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:11:18.999300   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:11:18.999342   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:11:18.999360   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:11:18.999371   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:11:18.999385   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:11:18.999393   25161 main.go:141] libmachine: (ha-866665-m02) Calling .PreCreateCheck
	I0315 06:11:18.999563   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:18.999936   25161 main.go:141] libmachine: Creating machine...
	I0315 06:11:18.999949   25161 main.go:141] libmachine: (ha-866665-m02) Calling .Create
	I0315 06:11:19.000111   25161 main.go:141] libmachine: (ha-866665-m02) Creating KVM machine...
	I0315 06:11:19.001375   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found existing default KVM network
	I0315 06:11:19.001562   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found existing private KVM network mk-ha-866665
	I0315 06:11:19.001732   25161 main.go:141] libmachine: (ha-866665-m02) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 ...
	I0315 06:11:19.001756   25161 main.go:141] libmachine: (ha-866665-m02) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:11:19.001804   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.001716   25510 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:11:19.001913   25161 main.go:141] libmachine: (ha-866665-m02) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:11:19.212214   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.212055   25510 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa...
	I0315 06:11:19.452618   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.452478   25510 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/ha-866665-m02.rawdisk...
	I0315 06:11:19.452640   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Writing magic tar header
	I0315 06:11:19.452650   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Writing SSH key tar header
	I0315 06:11:19.452658   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:19.452622   25510 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 ...
	I0315 06:11:19.452745   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02
	I0315 06:11:19.452762   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:11:19.452807   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02 (perms=drwx------)
	I0315 06:11:19.452837   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:11:19.452848   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:11:19.452866   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:11:19.452879   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:11:19.452889   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:11:19.452900   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Checking permissions on dir: /home
	I0315 06:11:19.452915   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Skipping /home - not owner
	I0315 06:11:19.452944   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:11:19.452981   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:11:19.452997   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:11:19.453009   25161 main.go:141] libmachine: (ha-866665-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
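
Note: the "Checking permissions / Setting executable bit / Skipping /home - not owner" lines above walk from the machine directory up toward the filesystem root, making each owned parent directory traversable. A minimal local sketch of that pattern (not minikube's implementation; the path is taken from the log and the ownership check is simplified):

package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strconv"
	"syscall"
)

func main() {
	cur, err := user.Current()
	if err != nil {
		panic(err)
	}
	uid, _ := strconv.Atoi(cur.Uid)

	// Start at the machine store path and walk upward, as in the log above.
	dir := "/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02"
	for {
		info, err := os.Stat(dir)
		if err != nil {
			break
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Println("Skipping", dir, "- not owner")
		} else if err := os.Chmod(dir, info.Mode()|0o100); err == nil {
			fmt.Println("Set executable bit on", dir)
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached "/"
			break
		}
		dir = parent
	}
}
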
	I0315 06:11:19.453019   25161 main.go:141] libmachine: (ha-866665-m02) Creating domain...
	I0315 06:11:19.454021   25161 main.go:141] libmachine: (ha-866665-m02) define libvirt domain using xml: 
	I0315 06:11:19.454038   25161 main.go:141] libmachine: (ha-866665-m02) <domain type='kvm'>
	I0315 06:11:19.454062   25161 main.go:141] libmachine: (ha-866665-m02)   <name>ha-866665-m02</name>
	I0315 06:11:19.454072   25161 main.go:141] libmachine: (ha-866665-m02)   <memory unit='MiB'>2200</memory>
	I0315 06:11:19.454097   25161 main.go:141] libmachine: (ha-866665-m02)   <vcpu>2</vcpu>
	I0315 06:11:19.454114   25161 main.go:141] libmachine: (ha-866665-m02)   <features>
	I0315 06:11:19.454127   25161 main.go:141] libmachine: (ha-866665-m02)     <acpi/>
	I0315 06:11:19.454138   25161 main.go:141] libmachine: (ha-866665-m02)     <apic/>
	I0315 06:11:19.454150   25161 main.go:141] libmachine: (ha-866665-m02)     <pae/>
	I0315 06:11:19.454159   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454178   25161 main.go:141] libmachine: (ha-866665-m02)   </features>
	I0315 06:11:19.454190   25161 main.go:141] libmachine: (ha-866665-m02)   <cpu mode='host-passthrough'>
	I0315 06:11:19.454202   25161 main.go:141] libmachine: (ha-866665-m02)   
	I0315 06:11:19.454217   25161 main.go:141] libmachine: (ha-866665-m02)   </cpu>
	I0315 06:11:19.454230   25161 main.go:141] libmachine: (ha-866665-m02)   <os>
	I0315 06:11:19.454241   25161 main.go:141] libmachine: (ha-866665-m02)     <type>hvm</type>
	I0315 06:11:19.454252   25161 main.go:141] libmachine: (ha-866665-m02)     <boot dev='cdrom'/>
	I0315 06:11:19.454263   25161 main.go:141] libmachine: (ha-866665-m02)     <boot dev='hd'/>
	I0315 06:11:19.454274   25161 main.go:141] libmachine: (ha-866665-m02)     <bootmenu enable='no'/>
	I0315 06:11:19.454290   25161 main.go:141] libmachine: (ha-866665-m02)   </os>
	I0315 06:11:19.454302   25161 main.go:141] libmachine: (ha-866665-m02)   <devices>
	I0315 06:11:19.454314   25161 main.go:141] libmachine: (ha-866665-m02)     <disk type='file' device='cdrom'>
	I0315 06:11:19.454332   25161 main.go:141] libmachine: (ha-866665-m02)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/boot2docker.iso'/>
	I0315 06:11:19.454343   25161 main.go:141] libmachine: (ha-866665-m02)       <target dev='hdc' bus='scsi'/>
	I0315 06:11:19.454372   25161 main.go:141] libmachine: (ha-866665-m02)       <readonly/>
	I0315 06:11:19.454389   25161 main.go:141] libmachine: (ha-866665-m02)     </disk>
	I0315 06:11:19.454397   25161 main.go:141] libmachine: (ha-866665-m02)     <disk type='file' device='disk'>
	I0315 06:11:19.454409   25161 main.go:141] libmachine: (ha-866665-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:11:19.454444   25161 main.go:141] libmachine: (ha-866665-m02)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/ha-866665-m02.rawdisk'/>
	I0315 06:11:19.454468   25161 main.go:141] libmachine: (ha-866665-m02)       <target dev='hda' bus='virtio'/>
	I0315 06:11:19.454484   25161 main.go:141] libmachine: (ha-866665-m02)     </disk>
	I0315 06:11:19.454501   25161 main.go:141] libmachine: (ha-866665-m02)     <interface type='network'>
	I0315 06:11:19.454515   25161 main.go:141] libmachine: (ha-866665-m02)       <source network='mk-ha-866665'/>
	I0315 06:11:19.454526   25161 main.go:141] libmachine: (ha-866665-m02)       <model type='virtio'/>
	I0315 06:11:19.454539   25161 main.go:141] libmachine: (ha-866665-m02)     </interface>
	I0315 06:11:19.454550   25161 main.go:141] libmachine: (ha-866665-m02)     <interface type='network'>
	I0315 06:11:19.454558   25161 main.go:141] libmachine: (ha-866665-m02)       <source network='default'/>
	I0315 06:11:19.454570   25161 main.go:141] libmachine: (ha-866665-m02)       <model type='virtio'/>
	I0315 06:11:19.454579   25161 main.go:141] libmachine: (ha-866665-m02)     </interface>
	I0315 06:11:19.454590   25161 main.go:141] libmachine: (ha-866665-m02)     <serial type='pty'>
	I0315 06:11:19.454608   25161 main.go:141] libmachine: (ha-866665-m02)       <target port='0'/>
	I0315 06:11:19.454623   25161 main.go:141] libmachine: (ha-866665-m02)     </serial>
	I0315 06:11:19.454637   25161 main.go:141] libmachine: (ha-866665-m02)     <console type='pty'>
	I0315 06:11:19.454656   25161 main.go:141] libmachine: (ha-866665-m02)       <target type='serial' port='0'/>
	I0315 06:11:19.454697   25161 main.go:141] libmachine: (ha-866665-m02)     </console>
	I0315 06:11:19.454721   25161 main.go:141] libmachine: (ha-866665-m02)     <rng model='virtio'>
	I0315 06:11:19.454741   25161 main.go:141] libmachine: (ha-866665-m02)       <backend model='random'>/dev/random</backend>
	I0315 06:11:19.454753   25161 main.go:141] libmachine: (ha-866665-m02)     </rng>
	I0315 06:11:19.454763   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454773   25161 main.go:141] libmachine: (ha-866665-m02)     
	I0315 06:11:19.454782   25161 main.go:141] libmachine: (ha-866665-m02)   </devices>
	I0315 06:11:19.454803   25161 main.go:141] libmachine: (ha-866665-m02) </domain>
	I0315 06:11:19.454817   25161 main.go:141] libmachine: (ha-866665-m02) 
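
Note: the block above is the libvirt domain XML the KVM driver logs before defining the guest (2200 MiB of memory, 2 vCPUs, the boot2docker ISO as a CD-ROM, the raw disk, and one interface on each of the private and default networks). As a rough illustration only, a domain definition like this can be rendered from a Go text/template; the struct and field names below are assumptions for the sketch, not the driver's real configuration type, and the rendered XML would then be handed to libvirt's define-domain call.

package main

import (
	"os"
	"text/template"
)

// domainConfig holds only the values that vary between machines in the XML
// above; it is a hypothetical type used for this illustration.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-866665-m02",
		MemoryMB: 2200,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/ha-866665-m02.rawdisk",
		Network:  "mk-ha-866665",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
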
	I0315 06:11:19.461775   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:2c:8a:b0 in network default
	I0315 06:11:19.462320   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring networks are active...
	I0315 06:11:19.462341   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:19.463146   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring network default is active
	I0315 06:11:19.463477   25161 main.go:141] libmachine: (ha-866665-m02) Ensuring network mk-ha-866665 is active
	I0315 06:11:19.463904   25161 main.go:141] libmachine: (ha-866665-m02) Getting domain xml...
	I0315 06:11:19.464636   25161 main.go:141] libmachine: (ha-866665-m02) Creating domain...
	I0315 06:11:20.671961   25161 main.go:141] libmachine: (ha-866665-m02) Waiting to get IP...
	I0315 06:11:20.672880   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:20.673319   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:20.673382   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:20.673313   25510 retry.go:31] will retry after 238.477447ms: waiting for machine to come up
	I0315 06:11:20.913926   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:20.914405   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:20.914428   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:20.914373   25510 retry.go:31] will retry after 314.77947ms: waiting for machine to come up
	I0315 06:11:21.230707   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:21.231215   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:21.231255   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:21.231181   25510 retry.go:31] will retry after 448.854491ms: waiting for machine to come up
	I0315 06:11:21.681861   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:21.682358   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:21.682388   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:21.682292   25510 retry.go:31] will retry after 371.773993ms: waiting for machine to come up
	I0315 06:11:22.055701   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:22.056084   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:22.056115   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:22.056007   25510 retry.go:31] will retry after 740.031821ms: waiting for machine to come up
	I0315 06:11:22.797893   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:22.798351   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:22.798402   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:22.798305   25510 retry.go:31] will retry after 599.3896ms: waiting for machine to come up
	I0315 06:11:23.399029   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:23.399566   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:23.399590   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:23.399509   25510 retry.go:31] will retry after 1.146745032s: waiting for machine to come up
	I0315 06:11:24.548189   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:24.548620   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:24.548644   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:24.548518   25510 retry.go:31] will retry after 1.283100132s: waiting for machine to come up
	I0315 06:11:25.833853   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:25.834293   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:25.834322   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:25.834252   25510 retry.go:31] will retry after 1.779659298s: waiting for machine to come up
	I0315 06:11:27.616200   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:27.616664   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:27.616690   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:27.616626   25510 retry.go:31] will retry after 1.75877657s: waiting for machine to come up
	I0315 06:11:29.376614   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:29.377098   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:29.377123   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:29.377056   25510 retry.go:31] will retry after 2.667490999s: waiting for machine to come up
	I0315 06:11:32.046591   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:32.046965   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:32.046991   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:32.046928   25510 retry.go:31] will retry after 3.546712049s: waiting for machine to come up
	I0315 06:11:35.595780   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:35.596299   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:35.596323   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:35.596248   25510 retry.go:31] will retry after 3.690333447s: waiting for machine to come up
	I0315 06:11:39.287776   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:39.288235   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find current IP address of domain ha-866665-m02 in network mk-ha-866665
	I0315 06:11:39.288263   25161 main.go:141] libmachine: (ha-866665-m02) DBG | I0315 06:11:39.288190   25510 retry.go:31] will retry after 5.596711816s: waiting for machine to come up
	I0315 06:11:44.886163   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.886584   25161 main.go:141] libmachine: (ha-866665-m02) Found IP for machine: 192.168.39.27
	I0315 06:11:44.886607   25161 main.go:141] libmachine: (ha-866665-m02) Reserving static IP address...
	I0315 06:11:44.886619   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.887066   25161 main.go:141] libmachine: (ha-866665-m02) DBG | unable to find host DHCP lease matching {name: "ha-866665-m02", mac: "52:54:00:fa:e0:d5", ip: "192.168.39.27"} in network mk-ha-866665
	I0315 06:11:44.960481   25161 main.go:141] libmachine: (ha-866665-m02) Reserved static IP address: 192.168.39.27
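
Note: the retries above ("will retry after 238ms ... 5.59s") are a growing, jittered backoff while the new guest acquires a DHCP lease on mk-ha-866665. A minimal sketch of that retry pattern; lookupIP here is a hypothetical stand-in for querying the network's leases by MAC address, included only to make the loop self-contained.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is hypothetical: it stands in for asking libvirt/dnsmasq for the
// DHCP lease matching a MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly like the intervals logged above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:fa:e0:d5", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
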
	I0315 06:11:44.960506   25161 main.go:141] libmachine: (ha-866665-m02) Waiting for SSH to be available...
	I0315 06:11:44.960552   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Getting to WaitForSSH function...
	I0315 06:11:44.962954   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.963264   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:44.963296   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:44.963451   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using SSH client type: external
	I0315 06:11:44.963479   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa (-rw-------)
	I0315 06:11:44.963517   25161 main.go:141] libmachine: (ha-866665-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:11:44.963538   25161 main.go:141] libmachine: (ha-866665-m02) DBG | About to run SSH command:
	I0315 06:11:44.963555   25161 main.go:141] libmachine: (ha-866665-m02) DBG | exit 0
	I0315 06:11:45.093093   25161 main.go:141] libmachine: (ha-866665-m02) DBG | SSH cmd err, output: <nil>: 
	I0315 06:11:45.093395   25161 main.go:141] libmachine: (ha-866665-m02) KVM machine creation complete!
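
Note: the WaitForSSH step above shells out to the system ssh binary with the options shown in the log and runs "exit 0" until it succeeds. A rough equivalent using os/exec; the host, user, port, and key path mirror the logged command line, and this is only a sketch of the reachability check, not the driver's code.

package main

import (
	"fmt"
	"os/exec"
)

func sshReachable(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	// A zero exit status means sshd is up and the key was accepted.
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	ok := sshReachable("192.168.39.27", "/path/to/machines/ha-866665-m02/id_rsa")
	fmt.Println("ssh reachable:", ok)
}
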
	I0315 06:11:45.093717   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:45.094288   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:45.094511   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:45.094683   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:11:45.094697   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:11:45.096173   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:11:45.096188   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:11:45.096194   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:11:45.096199   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.098422   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.098859   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.098892   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.099003   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.099170   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.099336   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.099498   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.099660   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.099916   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.099932   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:11:45.208077   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:11:45.208104   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:11:45.208115   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.211003   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.211441   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.211472   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.211761   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.211963   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.212138   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.212291   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.212491   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.212649   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.212672   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:11:45.325589   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:11:45.325673   25161 main.go:141] libmachine: found compatible host: buildroot
	I0315 06:11:45.325686   25161 main.go:141] libmachine: Provisioning with buildroot...
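
Note: provisioner detection above is driven by "cat /etc/os-release" and matching its key=value fields (NAME=Buildroot leads to the buildroot provisioner). A small standalone sketch of parsing that format, not minikube's own parser:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["NAME"], info["VERSION_ID"]) // Buildroot 2023.02.9
}
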
	I0315 06:11:45.325701   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.325978   25161 buildroot.go:166] provisioning hostname "ha-866665-m02"
	I0315 06:11:45.326014   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.326192   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.328903   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.329329   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.329357   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.329487   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.329661   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.329814   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.329939   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.330097   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.330278   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.330294   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665-m02 && echo "ha-866665-m02" | sudo tee /etc/hostname
	I0315 06:11:45.451730   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665-m02
	
	I0315 06:11:45.451779   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.454743   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.455063   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.455088   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.455261   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.455462   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.455626   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.455751   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.455918   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:45.456074   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:45.456090   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:11:45.574451   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
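
Note: the shell snippet above ensures the new hostname resolves locally: if no /etc/hosts line already ends in ha-866665-m02, it either rewrites an existing 127.0.1.1 entry or appends one. The same decision expressed in Go over the file contents, purely for clarity (minikube runs the shell version over SSH):

package main

import (
	"fmt"
	"strings"
)

func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
			return hosts // hostname already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + name + "\n" // append a new entry
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-866665-m02"))
}
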
	I0315 06:11:45.574505   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:11:45.574529   25161 buildroot.go:174] setting up certificates
	I0315 06:11:45.574544   25161 provision.go:84] configureAuth start
	I0315 06:11:45.574564   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetMachineName
	I0315 06:11:45.574872   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:45.577470   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.577872   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.577888   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.578042   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.580303   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.580661   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.580694   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.580845   25161 provision.go:143] copyHostCerts
	I0315 06:11:45.580875   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:11:45.580917   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:11:45.580928   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:11:45.581068   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:11:45.581189   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:11:45.581214   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:11:45.581221   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:11:45.581259   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:11:45.581357   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:11:45.581381   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:11:45.581386   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:11:45.581418   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:11:45.581497   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665-m02 san=[127.0.0.1 192.168.39.27 ha-866665-m02 localhost minikube]
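
Note: the "generating server cert ... san=[127.0.0.1 192.168.39.27 ha-866665-m02 localhost minikube]" step issues a node server certificate signed by the cluster CA with both IP and DNS subject alternative names. A compact, self-contained sketch of that with crypto/x509; it creates a throwaway CA (minikube instead reuses ca.pem/ca-key.pem from the certs directory), and the org name and lifetimes are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert for the new node, with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-866665-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.27")},
		DNSNames:     []string{"ha-866665-m02", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
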
	I0315 06:11:45.989846   25161 provision.go:177] copyRemoteCerts
	I0315 06:11:45.989902   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:11:45.989924   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:45.992909   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.993324   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:45.993356   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:45.993555   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:45.993777   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:45.993938   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:45.994060   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.081396   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:11:46.081473   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:11:46.109864   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:11:46.109938   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 06:11:46.137707   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:11:46.137790   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:11:46.164843   25161 provision.go:87] duration metric: took 590.282213ms to configureAuth
	I0315 06:11:46.164875   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:11:46.165037   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:46.165114   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.168318   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.168773   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.168796   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.169008   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.169194   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.169349   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.169468   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.169652   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:46.169818   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:46.169834   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:11:46.453910   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:11:46.453940   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:11:46.453950   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetURL
	I0315 06:11:46.455357   25161 main.go:141] libmachine: (ha-866665-m02) DBG | Using libvirt version 6000000
	I0315 06:11:46.458465   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.458944   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.458970   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.459139   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:11:46.459162   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:11:46.459170   25161 client.go:171] duration metric: took 27.459953429s to LocalClient.Create
	I0315 06:11:46.459197   25161 start.go:167] duration metric: took 27.460010575s to libmachine.API.Create "ha-866665"
	I0315 06:11:46.459209   25161 start.go:293] postStartSetup for "ha-866665-m02" (driver="kvm2")
	I0315 06:11:46.459224   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:11:46.459279   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.459554   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:11:46.459580   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.461984   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.462358   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.462377   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.462538   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.462718   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.462841   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.462983   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.549717   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:11:46.554606   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:11:46.554634   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:11:46.554712   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:11:46.554797   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:11:46.554808   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:11:46.554915   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:11:46.565688   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:11:46.592998   25161 start.go:296] duration metric: took 133.773575ms for postStartSetup
	I0315 06:11:46.593055   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetConfigRaw
	I0315 06:11:46.593615   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:46.596277   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.596611   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.596638   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.596890   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:11:46.597078   25161 start.go:128] duration metric: took 27.61665701s to createHost
	I0315 06:11:46.597110   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.599568   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.599955   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.599992   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.600096   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.600293   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.600482   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.600663   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.600821   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:11:46.601009   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0315 06:11:46.601023   25161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:11:46.709895   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483106.697596744
	
	I0315 06:11:46.709924   25161 fix.go:216] guest clock: 1710483106.697596744
	I0315 06:11:46.709934   25161 fix.go:229] Guest: 2024-03-15 06:11:46.697596744 +0000 UTC Remote: 2024-03-15 06:11:46.597092984 +0000 UTC m=+84.595361407 (delta=100.50376ms)
	I0315 06:11:46.709953   25161 fix.go:200] guest clock delta is within tolerance: 100.50376ms
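
Note: the guest clock check above runs "date +%s.%N" on the new node (the %!s(MISSING)/%!N(MISSING) markers in the log are Go's fmt output for the literal % verbs in the command string), parses the seconds.nanoseconds result, and compares it with the local wall clock against a tolerance; this run's delta was about 100ms. A sketch of the parse-and-compare step:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1710483106.697596744" (output of `date +%s.%N`)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, nsec, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	guest, err := parseGuestClock("1710483106.697596744")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Only adjust the guest clock when the delta is outside the tolerance.
	fmt.Println("guest clock delta:", delta)
}
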
	I0315 06:11:46.709960   25161 start.go:83] releasing machines lock for "ha-866665-m02", held for 27.7296545s
	I0315 06:11:46.709986   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.710286   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:46.713347   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.713749   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.713778   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.715805   25161 out.go:177] * Found network options:
	I0315 06:11:46.717132   25161 out.go:177]   - NO_PROXY=192.168.39.78
	W0315 06:11:46.718565   25161 proxy.go:119] fail to check proxy env: Error ip not in block
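
Note: the "fail to check proxy env: Error ip not in block" warnings appear to come from testing whether the other node's IP is covered by NO_PROXY; when an entry is a bare IP rather than a CIDR block, the block check cannot apply and the warning is logged. A small sketch of that kind of test, assuming NO_PROXY entries are either plain IPs or CIDRs (this is an illustration, not minikube's proxy package):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip is matched by any comma-separated NO_PROXY
// entry, treating entries as exact IPs or CIDR blocks.
func ipInNoProxy(ip, noProxy string) bool {
	target := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if entry == ip {
			return true
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ipInNoProxy("192.168.39.27", "192.168.39.78"))   // false: bare IP of another host
	fmt.Println(ipInNoProxy("192.168.39.27", "192.168.39.0/24")) // true: inside the CIDR block
}
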
	I0315 06:11:46.718627   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719172   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719355   25161 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:11:46.719441   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:11:46.719478   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	W0315 06:11:46.719563   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:11:46.719627   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:11:46.719648   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:11:46.722207   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722315   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722595   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.722671   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722705   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:46.722726   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:46.722741   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.722924   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.723022   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:11:46.723085   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.723217   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:11:46.723221   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.723342   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:11:46.723456   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:11:46.962454   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:11:46.969974   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:11:46.970051   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:11:46.986944   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
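
Note: the find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so they do not conflict with the cluster's own CNI (here it disabled 87-podman-bridge.conflist). An equivalent sketch in Go using a directory scan plus rename; minikube performs this over SSH on the guest, this version runs locally for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}
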
	I0315 06:11:46.986965   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:11:46.987024   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:11:47.005987   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:11:47.023015   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:11:47.023085   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:11:47.039088   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:11:47.055005   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:11:47.175129   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:11:47.329335   25161 docker.go:233] disabling docker service ...
	I0315 06:11:47.329416   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:11:47.345111   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:11:47.358569   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:11:47.495710   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:11:47.619051   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:11:47.633625   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:11:47.653527   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:11:47.653600   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.664914   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:11:47.664985   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.675987   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:11:47.688607   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
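
Note: the three sed invocations above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O's cgroup manager to cgroupfs, and force conmon into the "pod" cgroup by editing /etc/crio/crio.conf.d/02-crio.conf. The same edits expressed as in-memory string rewrites, a sketch of the effect rather than the commands minikube actually runs:

package main

import (
	"fmt"
	"regexp"
)

func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Drop any existing conmon_cgroup line, then add one after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
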
	I0315 06:11:47.699887   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:11:47.712058   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:11:47.722345   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:11:47.722393   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:11:47.735456   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:11:47.746113   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:11:47.859069   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:11:48.009681   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:11:48.009775   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:11:48.015225   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:11:48.015290   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:11:48.019748   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:11:48.061885   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:11:48.061977   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:11:48.096436   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:11:48.127478   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:11:48.128915   25161 out.go:177]   - env NO_PROXY=192.168.39.78
	I0315 06:11:48.130076   25161 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:11:48.132961   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:48.133395   25161 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:11:34 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:11:48.133425   25161 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:11:48.133753   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:11:48.138360   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:11:48.152751   25161 mustload.go:65] Loading cluster: ha-866665
	I0315 06:11:48.152991   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:11:48.153287   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:48.153315   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:48.168153   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0315 06:11:48.168705   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:48.169170   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:48.169191   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:48.169512   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:48.169723   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:11:48.171126   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:48.171526   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:48.171550   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:48.185533   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0315 06:11:48.185946   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:48.186369   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:48.186389   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:48.186692   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:48.186873   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:48.187131   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.27
	I0315 06:11:48.187151   25161 certs.go:194] generating shared ca certs ...
	I0315 06:11:48.187169   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.187316   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:11:48.187375   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:11:48.187390   25161 certs.go:256] generating profile certs ...
	I0315 06:11:48.187530   25161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:11:48.187561   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f
	I0315 06:11:48.187573   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:11:48.439901   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f ...
	I0315 06:11:48.439953   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f: {Name:mk4b26567136aa6ff7ab4bb617e00cc8478d0fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.440346   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f ...
	I0315 06:11:48.440362   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f: {Name:mk33e05d1d83753c9e7ce4362d742df9a7045182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:11:48.440489   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.3f347a4f -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:11:48.440665   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.3f347a4f -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:11:48.440836   25161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:11:48.440854   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:11:48.440872   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:11:48.440892   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:11:48.440909   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:11:48.440925   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:11:48.440942   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:11:48.440959   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:11:48.440977   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:11:48.441046   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:11:48.441092   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:11:48.441101   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:11:48.441131   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:11:48.441160   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:11:48.441192   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:11:48.441246   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:11:48.441287   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:11:48.441308   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:48.441326   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:11:48.441361   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:48.444608   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:48.445108   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:48.445136   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:48.445313   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:48.445527   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:48.445667   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:48.445814   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:48.516883   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 06:11:48.522537   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 06:11:48.534653   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 06:11:48.539108   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 06:11:48.550662   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 06:11:48.556214   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 06:11:48.567264   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 06:11:48.571559   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0315 06:11:48.582101   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 06:11:48.586153   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 06:11:48.596016   25161 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 06:11:48.599838   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0315 06:11:48.609654   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:11:48.636199   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:11:48.661419   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:11:48.687348   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:11:48.715380   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 06:11:48.740315   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:11:48.765710   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:11:48.793180   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:11:48.818824   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:11:48.843675   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:11:48.867791   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:11:48.892538   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 06:11:48.910145   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 06:11:48.927330   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 06:11:48.944720   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0315 06:11:48.962302   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 06:11:48.981248   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0315 06:11:49.000223   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 06:11:49.020279   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:11:49.026448   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:11:49.039683   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.044357   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.044408   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:11:49.050433   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:11:49.064150   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:11:49.077966   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.083512   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.083575   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:11:49.089653   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:11:49.102055   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:11:49.114119   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.118843   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.118901   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:11:49.124809   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:11:49.136983   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:11:49.141295   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:11:49.141350   25161 kubeadm.go:928] updating node {m02 192.168.39.27 8443 v1.28.4 crio true true} ...
	I0315 06:11:49.141446   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:11:49.141470   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:11:49.141497   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:11:49.160734   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:11:49.160794   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 06:11:49.160844   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:11:49.171655   25161 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 06:11:49.171703   25161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 06:11:49.182048   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 06:11:49.182079   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:11:49.182157   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:11:49.182203   25161 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0315 06:11:49.182159   25161 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0315 06:11:49.187331   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 06:11:49.187360   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 06:11:50.311616   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:11:50.329163   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:11:50.329314   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:11:50.334183   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 06:11:50.334229   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0315 06:11:56.954032   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:11:56.954128   25161 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:11:56.959313   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 06:11:56.959348   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 06:11:57.207604   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 06:11:57.218204   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 06:11:57.235730   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:11:57.252913   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:11:57.270062   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:11:57.274487   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:11:57.286677   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:11:57.426308   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:11:57.444974   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:11:57.445449   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:11:57.445488   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:11:57.460080   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0315 06:11:57.460532   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:11:57.460957   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:11:57.460974   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:11:57.461376   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:11:57.461625   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:11:57.461800   25161 start.go:316] joinCluster: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:11:57.461917   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 06:11:57.461935   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:11:57.464992   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:57.465490   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:11:57.465517   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:11:57.465709   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:11:57.465895   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:11:57.466114   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:11:57.466266   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:11:57.635488   25161 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:11:57.635545   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d82o6a.5k3xjxfj0ny7by1z --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I0315 06:12:37.437149   25161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d82o6a.5k3xjxfj0ny7by1z --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (39.801581492s)
	I0315 06:12:37.437183   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 06:12:37.893523   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m02 minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false
	I0315 06:12:38.000064   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-866665-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 06:12:38.117574   25161 start.go:318] duration metric: took 40.655767484s to joinCluster
	I0315 06:12:38.117651   25161 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:12:38.119216   25161 out.go:177] * Verifying Kubernetes components...
	I0315 06:12:38.117888   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:12:38.120439   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:12:38.282643   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:12:38.299969   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:12:38.300252   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 06:12:38.300331   25161 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.78:8443
	I0315 06:12:38.300534   25161 node_ready.go:35] waiting up to 6m0s for node "ha-866665-m02" to be "Ready" ...
	I0315 06:12:38.300616   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:38.300624   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:38.300631   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:38.300635   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:38.310858   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:12:38.801454   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:38.801482   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:38.801493   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:38.801498   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:38.805250   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:39.301423   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:39.301446   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:39.301459   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:39.301465   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:39.305185   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:39.801156   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:39.801178   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:39.801185   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:39.801190   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:39.805670   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:40.301692   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:40.301714   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:40.301726   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:40.301732   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:40.305762   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:40.306565   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:40.801762   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:40.801785   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:40.801796   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:40.801800   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:40.807075   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:41.301728   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:41.301749   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:41.301757   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:41.301761   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:41.305174   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:41.801238   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:41.801267   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:41.801278   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:41.801284   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:41.804969   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:42.300804   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:42.300824   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:42.300831   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:42.300836   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:42.305441   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:42.306636   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:42.801494   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:42.801526   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:42.801533   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:42.801537   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:42.805306   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:43.301268   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:43.301289   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:43.301297   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:43.301301   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:43.305499   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:43.801380   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:43.801400   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:43.801408   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:43.801419   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:43.806326   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.301704   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:44.301727   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:44.301735   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:44.301741   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:44.305934   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.801016   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:44.801040   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:44.801047   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:44.801052   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:44.805913   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:44.806538   25161 node_ready.go:53] node "ha-866665-m02" has status "Ready":"False"
	I0315 06:12:45.301701   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:45.301777   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:45.301793   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:45.301806   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:45.307725   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:45.801737   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:45.801759   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:45.801770   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:45.801776   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:45.807657   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:46.301709   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:46.301733   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:46.301742   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:46.301748   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:46.308832   25161 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0315 06:12:46.800901   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:46.800930   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:46.800953   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:46.800962   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:46.804590   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.301026   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:47.301061   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.301074   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.301084   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.304330   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.305259   25161 node_ready.go:49] node "ha-866665-m02" has status "Ready":"True"
	I0315 06:12:47.305282   25161 node_ready.go:38] duration metric: took 9.004730208s for node "ha-866665-m02" to be "Ready" ...
	I0315 06:12:47.305294   25161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:12:47.305371   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:47.305385   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.305396   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.305403   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.311117   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:47.317728   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.317807   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mgthb
	I0315 06:12:47.317820   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.317829   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.317836   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.320636   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.321240   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.321255   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.321262   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.321265   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.323898   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.324532   25161 pod_ready.go:92] pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.324548   25161 pod_ready.go:81] duration metric: took 6.79959ms for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.324556   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.324600   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r57px
	I0315 06:12:47.324607   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.324614   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.324619   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.327370   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.328092   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.328108   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.328117   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.328122   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.330755   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.331524   25161 pod_ready.go:92] pod "coredns-5dd5756b68-r57px" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.331539   25161 pod_ready.go:81] duration metric: took 6.977272ms for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.331546   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.331600   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665
	I0315 06:12:47.331612   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.331620   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.331625   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.334533   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.335071   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.335082   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.335087   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.335091   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.337345   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.337873   25161 pod_ready.go:92] pod "etcd-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.337885   25161 pod_ready.go:81] duration metric: took 6.334392ms for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.337892   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.337928   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m02
	I0315 06:12:47.337935   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.337942   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.337946   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.340522   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:12:47.341110   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:47.341123   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.341131   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.341136   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.344723   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.345392   25161 pod_ready.go:92] pod "etcd-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.345404   25161 pod_ready.go:81] duration metric: took 7.506484ms for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.345416   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.502029   25161 request.go:629] Waited for 156.551918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:12:47.502079   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:12:47.502086   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.502096   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.502105   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.505512   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.701509   25161 request.go:629] Waited for 195.358809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.701574   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:47.701586   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.701597   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.701605   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.705391   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:47.705935   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:47.705952   25161 pod_ready.go:81] duration metric: took 360.530863ms for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.705962   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:47.901162   25161 request.go:629] Waited for 195.120234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:12:47.901229   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:12:47.901233   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:47.901240   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:47.901243   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:47.904715   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.101650   25161 request.go:629] Waited for 196.22571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.101726   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.101733   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.101744   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.101759   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.105495   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.105945   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.105966   25161 pod_ready.go:81] duration metric: took 399.998423ms for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.105975   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.302123   25161 request.go:629] Waited for 196.080349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:12:48.302232   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:12:48.302243   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.302250   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.302254   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.306075   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.501855   25161 request.go:629] Waited for 195.154281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:48.501923   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:48.501928   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.501936   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.501942   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.506180   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:48.506886   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.506904   25161 pod_ready.go:81] duration metric: took 400.923624ms for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.506914   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.702021   25161 request.go:629] Waited for 195.031498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:12:48.702078   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:12:48.702083   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.702091   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.702095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.705692   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.901653   25161 request.go:629] Waited for 195.17366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.901712   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:48.901718   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:48.901726   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:48.901729   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:48.905124   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:48.905675   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:48.905693   25161 pod_ready.go:81] duration metric: took 398.773812ms for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:48.905702   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.101312   25161 request.go:629] Waited for 195.556427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:12:49.101369   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:12:49.101374   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.101381   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.101384   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.105639   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:49.301905   25161 request.go:629] Waited for 195.292907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:49.301953   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:49.301958   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.301966   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.301970   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.305525   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.306205   25161 pod_ready.go:92] pod "kube-proxy-lqzk8" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:49.306233   25161 pod_ready.go:81] duration metric: took 400.522917ms for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.306245   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.501418   25161 request.go:629] Waited for 195.105502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:12:49.501493   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:12:49.501506   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.501517   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.501527   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.505178   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.701493   25161 request.go:629] Waited for 195.378076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:49.701573   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:49.701581   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.701592   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.701596   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.705281   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:49.705957   25161 pod_ready.go:92] pod "kube-proxy-sbxgg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:49.705978   25161 pod_ready.go:81] duration metric: took 399.7239ms for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.705991   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:49.901028   25161 request.go:629] Waited for 194.979548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:12:49.901083   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:12:49.901094   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:49.901113   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:49.901124   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:49.904875   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.101657   25161 request.go:629] Waited for 196.275103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:50.101737   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:12:50.101745   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.101755   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.101771   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.105770   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.106333   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:50.106352   25161 pod_ready.go:81] duration metric: took 400.352693ms for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.106365   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.301527   25161 request.go:629] Waited for 195.083975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:12:50.301585   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:12:50.301590   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.301597   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.301601   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.305765   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:50.501889   25161 request.go:629] Waited for 195.380466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:50.501943   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:12:50.501950   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.501957   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.501968   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.508595   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:12:50.510198   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:12:50.510218   25161 pod_ready.go:81] duration metric: took 403.844299ms for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:12:50.510228   25161 pod_ready.go:38] duration metric: took 3.204921641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
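
The repeated "Waited for ... due to client-side throttling" entries above come from the Kubernetes client pacing its own requests before each GET. Below is a minimal stdlib sketch of that idea, gating placeholder requests with a ticker; the 200ms budget and the URL list are assumptions for illustration, not minikube's actual limiter settings.

// Minimal sketch (not minikube's code): model client-side throttling by
// releasing one request per tick. The URLs are placeholders taken from the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	urls := []string{ // hypothetical endpoints, stand-ins for the API paths above
		"https://192.168.39.78:8443/api/v1/nodes/ha-866665",
		"https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods",
	}
	tick := time.NewTicker(200 * time.Millisecond) // ~5 requests/second budget
	defer tick.Stop()
	for _, u := range urls {
		start := time.Now()
		<-tick.C // block until the limiter releases a slot
		fmt.Printf("waited %v before GET %s\n", time.Since(start), u)
		// A real client would now inspect the response; errors are ignored here.
		if resp, err := http.Get(u); err == nil {
			resp.Body.Close()
		}
	}
}
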
	I0315 06:12:50.510243   25161 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:12:50.510297   25161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:12:50.525226   25161 api_server.go:72] duration metric: took 12.407537134s to wait for apiserver process to appear ...
	I0315 06:12:50.525257   25161 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:12:50.525278   25161 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0315 06:12:50.531827   25161 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0315 06:12:50.531886   25161 round_trippers.go:463] GET https://192.168.39.78:8443/version
	I0315 06:12:50.531891   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.531899   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.531904   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.533184   25161 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 06:12:50.533279   25161 api_server.go:141] control plane version: v1.28.4
	I0315 06:12:50.533300   25161 api_server.go:131] duration metric: took 8.036289ms to wait for apiserver health ...
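
The healthz wait above polls https://192.168.39.78:8443/healthz until it answers 200 with the body "ok". A small stdlib-only sketch of such a poll follows; the 2s interval and the skipped TLS verification are simplifications for the example, not what minikube ships.

// Illustrative healthz poll: GET /healthz until it returns 200 "ok" or the
// deadline passes. InsecureSkipVerify is used only to keep the sketch short.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("healthz did not become ready before the deadline")
}

func main() {
	if err := waitHealthz("https://192.168.39.78:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned ok")
}
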
	I0315 06:12:50.533307   25161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:12:50.701639   25161 request.go:629] Waited for 168.269401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:50.701695   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:50.701702   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.701712   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.701721   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.707893   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:12:50.712112   25161 system_pods.go:59] 17 kube-system pods found
	I0315 06:12:50.712143   25161 system_pods.go:61] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:12:50.712149   25161 system_pods.go:61] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:12:50.712154   25161 system_pods.go:61] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:12:50.712159   25161 system_pods.go:61] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:12:50.712163   25161 system_pods.go:61] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:12:50.712168   25161 system_pods.go:61] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:12:50.712173   25161 system_pods.go:61] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:12:50.712178   25161 system_pods.go:61] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:12:50.712183   25161 system_pods.go:61] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:12:50.712189   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:12:50.712197   25161 system_pods.go:61] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:12:50.712203   25161 system_pods.go:61] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:12:50.712212   25161 system_pods.go:61] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:12:50.712217   25161 system_pods.go:61] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:12:50.712225   25161 system_pods.go:61] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:12:50.712229   25161 system_pods.go:61] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:12:50.712233   25161 system_pods.go:61] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:12:50.712241   25161 system_pods.go:74] duration metric: took 178.928299ms to wait for pod list to return data ...
	I0315 06:12:50.712257   25161 default_sa.go:34] waiting for default service account to be created ...
	I0315 06:12:50.901688   25161 request.go:629] Waited for 189.357264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:12:50.901760   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:12:50.901767   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:50.901774   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:50.901779   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:50.905542   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:12:50.905747   25161 default_sa.go:45] found service account: "default"
	I0315 06:12:50.905766   25161 default_sa.go:55] duration metric: took 193.501058ms for default service account to be created ...
	I0315 06:12:50.905776   25161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 06:12:51.101142   25161 request.go:629] Waited for 195.290804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:51.101193   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:12:51.101200   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:51.101209   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:51.101218   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:51.106594   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:12:51.111129   25161 system_pods.go:86] 17 kube-system pods found
	I0315 06:12:51.111156   25161 system_pods.go:89] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:12:51.111163   25161 system_pods.go:89] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:12:51.111169   25161 system_pods.go:89] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:12:51.111175   25161 system_pods.go:89] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:12:51.111181   25161 system_pods.go:89] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:12:51.111187   25161 system_pods.go:89] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:12:51.111193   25161 system_pods.go:89] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:12:51.111200   25161 system_pods.go:89] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:12:51.111206   25161 system_pods.go:89] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:12:51.111220   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:12:51.111236   25161 system_pods.go:89] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:12:51.111245   25161 system_pods.go:89] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:12:51.111253   25161 system_pods.go:89] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:12:51.111262   25161 system_pods.go:89] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:12:51.111269   25161 system_pods.go:89] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:12:51.111279   25161 system_pods.go:89] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:12:51.111285   25161 system_pods.go:89] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:12:51.111297   25161 system_pods.go:126] duration metric: took 205.514134ms to wait for k8s-apps to be running ...
	I0315 06:12:51.111311   25161 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 06:12:51.111363   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:12:51.128999   25161 system_svc.go:56] duration metric: took 17.683933ms WaitForService to wait for kubelet
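
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and relies only on the exit code. A local sketch of the same check with os/exec; the unit name comes from the log, and running it locally instead of over SSH is the simplification.

// `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
// so the boolean answer is just "did the command succeed".
package main

import (
	"fmt"
	"os/exec"
)

func serviceActive(unit string) bool {
	// --quiet suppresses output; the exit code carries the answer.
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
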
	I0315 06:12:51.129024   25161 kubeadm.go:576] duration metric: took 13.01133885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:12:51.129040   25161 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:12:51.301469   25161 request.go:629] Waited for 172.362621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes
	I0315 06:12:51.301556   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes
	I0315 06:12:51.301562   25161 round_trippers.go:469] Request Headers:
	I0315 06:12:51.301570   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:12:51.301577   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:12:51.305944   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:12:51.306624   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:12:51.306647   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:12:51.306657   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:12:51.306661   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:12:51.306666   25161 node_conditions.go:105] duration metric: took 177.621595ms to run NodePressure ...
	I0315 06:12:51.306683   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:12:51.306706   25161 start.go:254] writing updated cluster config ...
	I0315 06:12:51.309068   25161 out.go:177] 
	I0315 06:12:51.310799   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:12:51.310895   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:12:51.312662   25161 out.go:177] * Starting "ha-866665-m03" control-plane node in "ha-866665" cluster
	I0315 06:12:51.313873   25161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:12:51.313891   25161 cache.go:56] Caching tarball of preloaded images
	I0315 06:12:51.313994   25161 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:12:51.314007   25161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:12:51.314110   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:12:51.314268   25161 start.go:360] acquireMachinesLock for ha-866665-m03: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:12:51.314311   25161 start.go:364] duration metric: took 24.232µs to acquireMachinesLock for "ha-866665-m03"
	I0315 06:12:51.314334   25161 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:12:51.314439   25161 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0315 06:12:51.315981   25161 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:12:51.316063   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:12:51.316089   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:12:51.331141   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0315 06:12:51.331538   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:12:51.332014   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:12:51.332036   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:12:51.332346   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:12:51.332539   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:12:51.332703   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:12:51.332943   25161 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:12:51.332970   25161 client.go:168] LocalClient.Create starting
	I0315 06:12:51.333029   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:12:51.333060   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:12:51.333074   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:12:51.333141   25161 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:12:51.333158   25161 main.go:141] libmachine: Decoding PEM data...
	I0315 06:12:51.333172   25161 main.go:141] libmachine: Parsing certificate...
	I0315 06:12:51.333188   25161 main.go:141] libmachine: Running pre-create checks...
	I0315 06:12:51.333196   25161 main.go:141] libmachine: (ha-866665-m03) Calling .PreCreateCheck
	I0315 06:12:51.333400   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:12:51.333796   25161 main.go:141] libmachine: Creating machine...
	I0315 06:12:51.333811   25161 main.go:141] libmachine: (ha-866665-m03) Calling .Create
	I0315 06:12:51.333947   25161 main.go:141] libmachine: (ha-866665-m03) Creating KVM machine...
	I0315 06:12:51.335286   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found existing default KVM network
	I0315 06:12:51.335475   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found existing private KVM network mk-ha-866665
	I0315 06:12:51.335613   25161 main.go:141] libmachine: (ha-866665-m03) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 ...
	I0315 06:12:51.335663   25161 main.go:141] libmachine: (ha-866665-m03) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:12:51.335739   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.335629   25860 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:12:51.335847   25161 main.go:141] libmachine: (ha-866665-m03) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:12:51.562090   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.561964   25860 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa...
	I0315 06:12:51.780631   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.780514   25860 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/ha-866665-m03.rawdisk...
	I0315 06:12:51.780658   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Writing magic tar header
	I0315 06:12:51.780668   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Writing SSH key tar header
	I0315 06:12:51.780676   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:51.780648   25860 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 ...
	I0315 06:12:51.780777   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03
	I0315 06:12:51.780796   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:12:51.780804   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03 (perms=drwx------)
	I0315 06:12:51.780814   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:12:51.780828   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:12:51.780844   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:12:51.780857   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:12:51.780874   25161 main.go:141] libmachine: (ha-866665-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:12:51.780892   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:12:51.780904   25161 main.go:141] libmachine: (ha-866665-m03) Creating domain...
	I0315 06:12:51.780922   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:12:51.780939   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:12:51.780953   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:12:51.780961   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Checking permissions on dir: /home
	I0315 06:12:51.780972   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Skipping /home - not owner
	I0315 06:12:51.781798   25161 main.go:141] libmachine: (ha-866665-m03) define libvirt domain using xml: 
	I0315 06:12:51.781829   25161 main.go:141] libmachine: (ha-866665-m03) <domain type='kvm'>
	I0315 06:12:51.781840   25161 main.go:141] libmachine: (ha-866665-m03)   <name>ha-866665-m03</name>
	I0315 06:12:51.781850   25161 main.go:141] libmachine: (ha-866665-m03)   <memory unit='MiB'>2200</memory>
	I0315 06:12:51.781861   25161 main.go:141] libmachine: (ha-866665-m03)   <vcpu>2</vcpu>
	I0315 06:12:51.781877   25161 main.go:141] libmachine: (ha-866665-m03)   <features>
	I0315 06:12:51.781890   25161 main.go:141] libmachine: (ha-866665-m03)     <acpi/>
	I0315 06:12:51.781901   25161 main.go:141] libmachine: (ha-866665-m03)     <apic/>
	I0315 06:12:51.781911   25161 main.go:141] libmachine: (ha-866665-m03)     <pae/>
	I0315 06:12:51.781921   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.781931   25161 main.go:141] libmachine: (ha-866665-m03)   </features>
	I0315 06:12:51.781943   25161 main.go:141] libmachine: (ha-866665-m03)   <cpu mode='host-passthrough'>
	I0315 06:12:51.781955   25161 main.go:141] libmachine: (ha-866665-m03)   
	I0315 06:12:51.781971   25161 main.go:141] libmachine: (ha-866665-m03)   </cpu>
	I0315 06:12:51.782001   25161 main.go:141] libmachine: (ha-866665-m03)   <os>
	I0315 06:12:51.782025   25161 main.go:141] libmachine: (ha-866665-m03)     <type>hvm</type>
	I0315 06:12:51.782047   25161 main.go:141] libmachine: (ha-866665-m03)     <boot dev='cdrom'/>
	I0315 06:12:51.782058   25161 main.go:141] libmachine: (ha-866665-m03)     <boot dev='hd'/>
	I0315 06:12:51.782067   25161 main.go:141] libmachine: (ha-866665-m03)     <bootmenu enable='no'/>
	I0315 06:12:51.782078   25161 main.go:141] libmachine: (ha-866665-m03)   </os>
	I0315 06:12:51.782099   25161 main.go:141] libmachine: (ha-866665-m03)   <devices>
	I0315 06:12:51.782118   25161 main.go:141] libmachine: (ha-866665-m03)     <disk type='file' device='cdrom'>
	I0315 06:12:51.782137   25161 main.go:141] libmachine: (ha-866665-m03)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/boot2docker.iso'/>
	I0315 06:12:51.782149   25161 main.go:141] libmachine: (ha-866665-m03)       <target dev='hdc' bus='scsi'/>
	I0315 06:12:51.782163   25161 main.go:141] libmachine: (ha-866665-m03)       <readonly/>
	I0315 06:12:51.782178   25161 main.go:141] libmachine: (ha-866665-m03)     </disk>
	I0315 06:12:51.782190   25161 main.go:141] libmachine: (ha-866665-m03)     <disk type='file' device='disk'>
	I0315 06:12:51.782204   25161 main.go:141] libmachine: (ha-866665-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:12:51.782221   25161 main.go:141] libmachine: (ha-866665-m03)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/ha-866665-m03.rawdisk'/>
	I0315 06:12:51.782233   25161 main.go:141] libmachine: (ha-866665-m03)       <target dev='hda' bus='virtio'/>
	I0315 06:12:51.782246   25161 main.go:141] libmachine: (ha-866665-m03)     </disk>
	I0315 06:12:51.782258   25161 main.go:141] libmachine: (ha-866665-m03)     <interface type='network'>
	I0315 06:12:51.782293   25161 main.go:141] libmachine: (ha-866665-m03)       <source network='mk-ha-866665'/>
	I0315 06:12:51.782318   25161 main.go:141] libmachine: (ha-866665-m03)       <model type='virtio'/>
	I0315 06:12:51.782351   25161 main.go:141] libmachine: (ha-866665-m03)     </interface>
	I0315 06:12:51.782380   25161 main.go:141] libmachine: (ha-866665-m03)     <interface type='network'>
	I0315 06:12:51.782389   25161 main.go:141] libmachine: (ha-866665-m03)       <source network='default'/>
	I0315 06:12:51.782397   25161 main.go:141] libmachine: (ha-866665-m03)       <model type='virtio'/>
	I0315 06:12:51.782403   25161 main.go:141] libmachine: (ha-866665-m03)     </interface>
	I0315 06:12:51.782410   25161 main.go:141] libmachine: (ha-866665-m03)     <serial type='pty'>
	I0315 06:12:51.782415   25161 main.go:141] libmachine: (ha-866665-m03)       <target port='0'/>
	I0315 06:12:51.782422   25161 main.go:141] libmachine: (ha-866665-m03)     </serial>
	I0315 06:12:51.782428   25161 main.go:141] libmachine: (ha-866665-m03)     <console type='pty'>
	I0315 06:12:51.782435   25161 main.go:141] libmachine: (ha-866665-m03)       <target type='serial' port='0'/>
	I0315 06:12:51.782440   25161 main.go:141] libmachine: (ha-866665-m03)     </console>
	I0315 06:12:51.782450   25161 main.go:141] libmachine: (ha-866665-m03)     <rng model='virtio'>
	I0315 06:12:51.782457   25161 main.go:141] libmachine: (ha-866665-m03)       <backend model='random'>/dev/random</backend>
	I0315 06:12:51.782467   25161 main.go:141] libmachine: (ha-866665-m03)     </rng>
	I0315 06:12:51.782473   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.782481   25161 main.go:141] libmachine: (ha-866665-m03)     
	I0315 06:12:51.782499   25161 main.go:141] libmachine: (ha-866665-m03)   </devices>
	I0315 06:12:51.782515   25161 main.go:141] libmachine: (ha-866665-m03) </domain>
	I0315 06:12:51.782530   25161 main.go:141] libmachine: (ha-866665-m03) 
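
The block above is the libvirt <domain> XML that minikube defines for the new VM. A hedged sketch of rendering such a definition from a Go text/template follows; the template text and struct fields are illustrative, not minikube's actual template.

// Sketch of producing a libvirt <domain> definition from a template.
// Values mirror the log above: 2200 MiB, 2 vCPUs, the mk-ha-866665 network.
package main

import (
	"os"
	"text/template"
)

type domain struct {
	Name     string
	MemoryMB int
	VCPU     int
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	_ = tmpl.Execute(os.Stdout, domain{Name: "ha-866665-m03", MemoryMB: 2200, VCPU: 2, Network: "mk-ha-866665"})
}
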
	I0315 06:12:51.789529   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:31:f3:07 in network default
	I0315 06:12:51.790092   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring networks are active...
	I0315 06:12:51.790112   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:51.790878   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring network default is active
	I0315 06:12:51.791231   25161 main.go:141] libmachine: (ha-866665-m03) Ensuring network mk-ha-866665 is active
	I0315 06:12:51.791565   25161 main.go:141] libmachine: (ha-866665-m03) Getting domain xml...
	I0315 06:12:51.792423   25161 main.go:141] libmachine: (ha-866665-m03) Creating domain...
	I0315 06:12:53.035150   25161 main.go:141] libmachine: (ha-866665-m03) Waiting to get IP...
	I0315 06:12:53.036020   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.036527   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.036579   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.036522   25860 retry.go:31] will retry after 298.311457ms: waiting for machine to come up
	I0315 06:12:53.336016   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.336500   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.336523   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.336440   25860 retry.go:31] will retry after 281.788443ms: waiting for machine to come up
	I0315 06:12:53.620158   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.620721   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.620757   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.620683   25860 retry.go:31] will retry after 323.523218ms: waiting for machine to come up
	I0315 06:12:53.946180   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:53.946609   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:53.946643   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:53.946564   25860 retry.go:31] will retry after 451.748742ms: waiting for machine to come up
	I0315 06:12:54.400183   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:54.400665   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:54.400694   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:54.400619   25860 retry.go:31] will retry after 691.034866ms: waiting for machine to come up
	I0315 06:12:55.093354   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:55.093808   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:55.093835   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:55.093767   25860 retry.go:31] will retry after 634.767961ms: waiting for machine to come up
	I0315 06:12:55.729919   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:55.730365   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:55.730409   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:55.730308   25860 retry.go:31] will retry after 874.474327ms: waiting for machine to come up
	I0315 06:12:56.606554   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:56.606937   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:56.606965   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:56.606882   25860 retry.go:31] will retry after 1.259625025s: waiting for machine to come up
	I0315 06:12:57.868160   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:57.868623   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:57.868653   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:57.868582   25860 retry.go:31] will retry after 1.730370758s: waiting for machine to come up
	I0315 06:12:59.601624   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:12:59.602133   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:12:59.602158   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:12:59.602095   25860 retry.go:31] will retry after 1.898634494s: waiting for machine to come up
	I0315 06:13:01.502182   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:01.502681   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:01.502709   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:01.502645   25860 retry.go:31] will retry after 2.001541934s: waiting for machine to come up
	I0315 06:13:03.505961   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:03.506334   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:03.506363   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:03.506283   25860 retry.go:31] will retry after 2.795851868s: waiting for machine to come up
	I0315 06:13:06.305236   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:06.305602   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:06.305619   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:06.305587   25860 retry.go:31] will retry after 4.303060634s: waiting for machine to come up
	I0315 06:13:10.609875   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:10.610290   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find current IP address of domain ha-866665-m03 in network mk-ha-866665
	I0315 06:13:10.610311   25161 main.go:141] libmachine: (ha-866665-m03) DBG | I0315 06:13:10.610255   25860 retry.go:31] will retry after 5.533964577s: waiting for machine to come up
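
The "will retry after ..." lines above form a backoff loop that polls until the new domain obtains a DHCP lease. Below is a stdlib sketch of that wait-with-growing-delay pattern; the lease lookup is a placeholder closure, since minikube actually inspects the libvirt network's leases.

// Poll until a lookup succeeds, increasing the delay between attempts,
// roughly like the retry intervals printed in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the delay between attempts
		}
	}
	return "", errors.New("machine did not get an IP before the deadline")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 { // simulate a few misses before the lease appears
			return "", errors.New("no lease yet")
		}
		return "192.168.39.89", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
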
	I0315 06:13:16.145959   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.146672   25161 main.go:141] libmachine: (ha-866665-m03) Found IP for machine: 192.168.39.89
	I0315 06:13:16.146704   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has current primary IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.146713   25161 main.go:141] libmachine: (ha-866665-m03) Reserving static IP address...
	I0315 06:13:16.147097   25161 main.go:141] libmachine: (ha-866665-m03) DBG | unable to find host DHCP lease matching {name: "ha-866665-m03", mac: "52:54:00:76:48:bb", ip: "192.168.39.89"} in network mk-ha-866665
	I0315 06:13:16.224039   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Getting to WaitForSSH function...
	I0315 06:13:16.224069   25161 main.go:141] libmachine: (ha-866665-m03) Reserved static IP address: 192.168.39.89
	I0315 06:13:16.224081   25161 main.go:141] libmachine: (ha-866665-m03) Waiting for SSH to be available...
	I0315 06:13:16.227293   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.227831   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.227861   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.228100   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using SSH client type: external
	I0315 06:13:16.228126   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa (-rw-------)
	I0315 06:13:16.228153   25161 main.go:141] libmachine: (ha-866665-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:13:16.228167   25161 main.go:141] libmachine: (ha-866665-m03) DBG | About to run SSH command:
	I0315 06:13:16.228182   25161 main.go:141] libmachine: (ha-866665-m03) DBG | exit 0
	I0315 06:13:16.360633   25161 main.go:141] libmachine: (ha-866665-m03) DBG | SSH cmd err, output: <nil>: 
	I0315 06:13:16.360894   25161 main.go:141] libmachine: (ha-866665-m03) KVM machine creation complete!
	I0315 06:13:16.361233   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:13:16.361739   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:16.361905   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:16.362037   25161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:13:16.362079   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:13:16.363397   25161 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:13:16.363414   25161 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:13:16.363421   25161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:13:16.363427   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.365926   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.366337   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.366369   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.366516   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.366712   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.366872   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.367008   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.367121   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.367391   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.367404   25161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:13:16.483839   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:13:16.483866   25161 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:13:16.483876   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.486968   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.487349   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.487372   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.487482   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.487675   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.487823   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.487996   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.488192   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.488353   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.488365   25161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:13:16.605506   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:13:16.605595   25161 main.go:141] libmachine: found compatible host: buildroot
	I0315 06:13:16.605610   25161 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:13:16.605622   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.605919   25161 buildroot.go:166] provisioning hostname "ha-866665-m03"
	I0315 06:13:16.605947   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.606123   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.608659   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.609100   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.609137   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.609194   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.609394   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.609567   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.609731   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.609910   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.610068   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.610079   25161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665-m03 && echo "ha-866665-m03" | sudo tee /etc/hostname
	I0315 06:13:16.741484   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665-m03
	
	I0315 06:13:16.741514   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.744403   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.744887   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.744916   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.745131   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:16.745316   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.745462   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:16.745600   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:16.745780   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:16.745948   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:16.745968   25161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:13:16.872038   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
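
The hostname step above runs shell commands on the new node over SSH using the generated id_rsa key. A sketch with golang.org/x/crypto/ssh follows; the insecure host-key callback and the thin error handling are simplifications for the example, not how a provisioner should be shipped.

// Run a remote command over SSH with key-based auth (illustrative only).
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify host keys in real code
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.89:22", "docker",
		"/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa",
		`sudo hostname ha-866665-m03 && echo "ha-866665-m03" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
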
	I0315 06:13:16.872077   25161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:13:16.872093   25161 buildroot.go:174] setting up certificates
	I0315 06:13:16.872103   25161 provision.go:84] configureAuth start
	I0315 06:13:16.872112   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetMachineName
	I0315 06:13:16.872366   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:16.875149   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.875549   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.875578   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.875796   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:16.878408   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.878796   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:16.878826   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:16.878959   25161 provision.go:143] copyHostCerts
	I0315 06:13:16.878989   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:13:16.879030   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:13:16.879051   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:13:16.879133   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:13:16.879263   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:13:16.879290   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:13:16.879300   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:13:16.879348   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:13:16.879447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:13:16.879474   25161 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:13:16.879480   25161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:13:16.879515   25161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:13:16.879611   25161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665-m03 san=[127.0.0.1 192.168.39.89 ha-866665-m03 localhost minikube]
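
The provisioning step above issues a server certificate whose SANs cover 127.0.0.1, 192.168.39.89, ha-866665-m03, localhost and minikube, signed by the minikube CA. A hedged crypto/x509 sketch of issuing such a certificate follows; it generates a throwaway CA instead of reusing minikube's existing one, and drops error handling for brevity.

// Issue a server cert with the SAN set from the log, signed by a fresh CA.
// Errors are ignored to keep the sketch short; real code must check them.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-866665-m03", Organization: []string{"jenkins.ha-866665-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-866665-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
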
	I0315 06:13:17.071846   25161 provision.go:177] copyRemoteCerts
	I0315 06:13:17.071907   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:13:17.071930   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.074848   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.075190   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.075220   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.075462   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.075687   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.075843   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.075966   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.162763   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:13:17.162827   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:13:17.189144   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:13:17.189229   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 06:13:17.217003   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:13:17.217064   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:13:17.243067   25161 provision.go:87] duration metric: took 370.952795ms to configureAuth
	I0315 06:13:17.243129   25161 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:13:17.243358   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:17.243439   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.246118   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.246494   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.246529   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.246689   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.246863   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.247008   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.247186   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.247353   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:17.247503   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:17.247518   25161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:13:17.548364   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:13:17.548399   25161 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:13:17.548411   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetURL
	I0315 06:13:17.549886   25161 main.go:141] libmachine: (ha-866665-m03) DBG | Using libvirt version 6000000
	I0315 06:13:17.552092   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.552605   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.552634   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.552775   25161 main.go:141] libmachine: Docker is up and running!
	I0315 06:13:17.552787   25161 main.go:141] libmachine: Reticulating splines...
	I0315 06:13:17.552793   25161 client.go:171] duration metric: took 26.219813183s to LocalClient.Create
	I0315 06:13:17.552814   25161 start.go:167] duration metric: took 26.21987276s to libmachine.API.Create "ha-866665"
	I0315 06:13:17.552827   25161 start.go:293] postStartSetup for "ha-866665-m03" (driver="kvm2")
	I0315 06:13:17.552840   25161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:13:17.552860   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.553089   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:13:17.553112   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.555406   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.555833   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.555863   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.555982   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.556159   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.556331   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.556487   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.645620   25161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:13:17.650150   25161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:13:17.650175   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:13:17.650269   25161 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:13:17.650361   25161 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:13:17.650373   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:13:17.650473   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:13:17.660972   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:13:17.686274   25161 start.go:296] duration metric: took 133.43279ms for postStartSetup
	I0315 06:13:17.686339   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetConfigRaw
	I0315 06:13:17.686914   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:17.690246   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.690732   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.690768   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.691087   25161 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:13:17.691296   25161 start.go:128] duration metric: took 26.376846774s to createHost
	I0315 06:13:17.691321   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.693732   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.694136   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.694167   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.694333   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.694484   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.694662   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.694810   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.694986   25161 main.go:141] libmachine: Using SSH client type: native
	I0315 06:13:17.695155   25161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0315 06:13:17.695166   25161 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:13:17.817650   25161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483197.793014240
	
	I0315 06:13:17.817676   25161 fix.go:216] guest clock: 1710483197.793014240
	I0315 06:13:17.817686   25161 fix.go:229] Guest: 2024-03-15 06:13:17.79301424 +0000 UTC Remote: 2024-03-15 06:13:17.691310036 +0000 UTC m=+175.689578469 (delta=101.704204ms)
	I0315 06:13:17.817709   25161 fix.go:200] guest clock delta is within tolerance: 101.704204ms
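The clock check above reads the guest's time with date +%s.%N and compares it against the host's wall clock, only resyncing when the delta falls outside a tolerance; here the 101.7ms skew is accepted. A minimal Go sketch of that comparison follows. The function name clockDelta, the 2-second tolerance, and the float parsing are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the output of `date +%s.%N` run on the guest and returns
// how far the guest clock is ahead of (positive) or behind (negative) the
// given host reference time. float64 parsing loses a few hundred nanoseconds
// of precision, which is irrelevant for a millisecond-scale tolerance check.
func clockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestDateOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestDateOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Values taken from the log above: guest 1710483197.793014240,
	// host reference 2024-03-15 06:13:17.691310036 UTC.
	host := time.Date(2024, 3, 15, 6, 13, 17, 691310036, time.UTC)
	delta, err := clockDelta("1710483197.793014240", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}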
	I0315 06:13:17.817717   25161 start.go:83] releasing machines lock for "ha-866665-m03", held for 26.503394445s
	I0315 06:13:17.817741   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.818005   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:17.820569   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.820956   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.820993   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.823575   25161 out.go:177] * Found network options:
	I0315 06:13:17.825308   25161 out.go:177]   - NO_PROXY=192.168.39.78,192.168.39.27
	W0315 06:13:17.826923   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 06:13:17.826942   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:13:17.826955   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827544   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827752   25161 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:13:17.827852   25161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:13:17.827888   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	W0315 06:13:17.827969   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 06:13:17.827994   25161 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 06:13:17.828056   25161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:13:17.828078   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:13:17.830849   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.830955   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831208   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.831246   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831393   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.831503   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:17.831527   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:17.831563   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.831758   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.831760   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:13:17.831966   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:13:17.831955   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:17.832132   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:13:17.832324   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:13:18.085787   25161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:13:18.092348   25161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:13:18.092432   25161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:13:18.110796   25161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:13:18.110825   25161 start.go:494] detecting cgroup driver to use...
	I0315 06:13:18.110906   25161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:13:18.130014   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:13:18.144546   25161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:13:18.144603   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:13:18.160376   25161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:13:18.175139   25161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:13:18.307170   25161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:13:18.480533   25161 docker.go:233] disabling docker service ...
	I0315 06:13:18.480607   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:13:18.496871   25161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:13:18.512932   25161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:13:18.652631   25161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:13:18.784108   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:13:18.799682   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:13:18.821219   25161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:13:18.821290   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.832880   25161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:13:18.832951   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.844364   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:13:18.855802   25161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
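The three sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, and conmon_cgroup is dropped and re-added as "pod" right after it. Below is a rough Go sketch of an equivalent idempotent edit; rewriteCrioConf and its regular expressions are assumptions for illustration, since minikube itself shells out to sed exactly as shown.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// rewriteCrioConf mirrors the sed edits from the log: force the pause image,
// switch cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod".
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)

	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then add it back right after
	// cgroup_manager, matching the sed '/d' + '/a' pair in the log.
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		if strings.Contains(line, "conmon_cgroup = ") {
			continue
		}
		out = append(out, line)
		if strings.Contains(line, "cgroup_manager = ") {
			out = append(out, `conmon_cgroup = "pod"`)
		}
	}
	return os.WriteFile(path, []byte(strings.Join(out, "\n")), 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}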
	I0315 06:13:18.868166   25161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:13:18.879160   25161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:13:18.889700   25161 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:13:18.889769   25161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:13:18.905254   25161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:13:18.916136   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:19.062538   25161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:13:19.219783   25161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:13:19.219860   25161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:13:19.225963   25161 start.go:562] Will wait 60s for crictl version
	I0315 06:13:19.226038   25161 ssh_runner.go:195] Run: which crictl
	I0315 06:13:19.230678   25161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:13:19.271407   25161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:13:19.271485   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:13:19.304639   25161 ssh_runner.go:195] Run: crio --version
	I0315 06:13:19.343075   25161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:13:19.344917   25161 out.go:177]   - env NO_PROXY=192.168.39.78
	I0315 06:13:19.346592   25161 out.go:177]   - env NO_PROXY=192.168.39.78,192.168.39.27
	I0315 06:13:19.348317   25161 main.go:141] libmachine: (ha-866665-m03) Calling .GetIP
	I0315 06:13:19.351550   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:19.351969   25161 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:13:19.352005   25161 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:13:19.352278   25161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:13:19.357181   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:13:19.371250   25161 mustload.go:65] Loading cluster: ha-866665
	I0315 06:13:19.371465   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:19.371703   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:19.371741   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:19.387368   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0315 06:13:19.387853   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:19.388336   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:19.388351   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:19.388758   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:19.388940   25161 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:13:19.390736   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:13:19.391070   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:19.391119   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:19.406949   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0315 06:13:19.407440   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:19.407986   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:19.408009   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:19.408382   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:19.408570   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:13:19.408770   25161 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.89
	I0315 06:13:19.408787   25161 certs.go:194] generating shared ca certs ...
	I0315 06:13:19.408804   25161 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.408959   25161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:13:19.409018   25161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:13:19.409031   25161 certs.go:256] generating profile certs ...
	I0315 06:13:19.409130   25161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:13:19.409166   25161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4
	I0315 06:13:19.409187   25161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:13:19.601873   25161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 ...
	I0315 06:13:19.601901   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4: {Name:mk3a9401e785e81d9d4b250b9aabdd54331f0925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.602059   25161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4 ...
	I0315 06:13:19.602071   25161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4: {Name:mk6d7a4285f4b6cc1db493575ebcf69c5f0eb90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:13:19.602134   25161 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b5681ae4 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:13:19.602264   25161 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b5681ae4 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
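The apiserver serving certificate generated above is signed by the shared minikubeCA and carries the service IP, loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254 as SANs, which is why every node (and the HA virtual IP) can terminate TLS for the same API endpoint. The sketch below shows roughly how such a certificate can be issued with Go's crypto/x509; issueServingCert, the throwaway CA in main, the key size, validity period, and output file names are assumptions for illustration and not minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServingCert signs an apiserver serving certificate with the given CA,
// embedding the IP SANs listed in the log. Output paths are placeholders.
func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.78", "192.168.39.27", "192.168.39.89", "192.168.39.254"} {
		ips = append(ips, net.ParseIP(s))
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("apiserver.crt", crt, 0o644); err != nil {
		return err
	}
	k := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return os.WriteFile("apiserver.key", k, 0o600)
}

func main() {
	// Throwaway self-signed CA so the sketch runs standalone; the real
	// cluster reuses the minikubeCA key pair from the profile directory.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	if err := issueServingCert(caCert, caKey); err != nil {
		panic(err)
	}
}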
	I0315 06:13:19.602380   25161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:13:19.602395   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:13:19.602406   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:13:19.602416   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:13:19.602425   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:13:19.602435   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:13:19.602447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:13:19.602461   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:13:19.602470   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:13:19.602530   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:13:19.602557   25161 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:13:19.602566   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:13:19.602588   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:13:19.602609   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:13:19.602631   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:13:19.602669   25161 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:13:19.602695   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:13:19.602710   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:13:19.602723   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:19.602752   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:13:19.606208   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:19.606767   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:13:19.606808   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:19.607044   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:13:19.607256   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:13:19.607383   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:13:19.607621   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:13:19.680841   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0315 06:13:19.686663   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 06:13:19.699918   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0315 06:13:19.704654   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 06:13:19.719942   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 06:13:19.724961   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 06:13:19.739220   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0315 06:13:19.744145   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0315 06:13:19.757712   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0315 06:13:19.763027   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 06:13:19.777923   25161 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0315 06:13:19.782472   25161 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0315 06:13:19.794362   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:13:19.822600   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:13:19.850637   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:13:19.879297   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:13:19.906629   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0315 06:13:19.933751   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:13:19.959528   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:13:19.987312   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:13:20.016093   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:13:20.046080   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:13:20.076406   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:13:20.104494   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 06:13:20.123584   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 06:13:20.143595   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 06:13:20.162301   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0315 06:13:20.182440   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 06:13:20.201422   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0315 06:13:20.222325   25161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 06:13:20.243409   25161 ssh_runner.go:195] Run: openssl version
	I0315 06:13:20.249530   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:13:20.262093   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.266970   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.267032   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:13:20.273065   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:13:20.286946   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:13:20.300302   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.305424   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.305485   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:13:20.311885   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:13:20.325415   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:13:20.339226   25161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.344845   25161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.344908   25161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:13:20.351216   25161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
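Each CA installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 symlinks above), which is how OpenSSL-based clients on the node discover it. A small hypothetical Go helper doing the same hash-and-symlink step is sketched below; ensureHashLink and the example paths are assumptions, and minikube performs this with the shell commands shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureHashLink computes the OpenSSL subject hash of a CA certificate and
// makes sure certsDir contains a "<hash>.0" symlink pointing at it.
func ensureHashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Paths from the log; needs root to write into /etc/ssl/certs.
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}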
	I0315 06:13:20.365073   25161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:13:20.370323   25161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:13:20.370378   25161 kubeadm.go:928] updating node {m03 192.168.39.89 8443 v1.28.4 crio true true} ...
	I0315 06:13:20.370464   25161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:13:20.370490   25161 kube-vip.go:111] generating kube-vip config ...
	I0315 06:13:20.370536   25161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:13:20.390769   25161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:13:20.390844   25161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 06:13:20.390920   25161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:13:20.402252   25161 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 06:13:20.402322   25161 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 06:13:20.413609   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 06:13:20.413634   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0315 06:13:20.413641   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:13:20.413682   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:13:20.413727   25161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:13:20.413609   25161 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0315 06:13:20.413771   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:13:20.413860   25161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:13:20.418768   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 06:13:20.418804   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 06:13:20.444447   25161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:13:20.444452   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 06:13:20.444545   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 06:13:20.444585   25161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:13:20.508056   25161 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 06:13:20.508106   25161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
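Because /var/lib/minikube/binaries/v1.28.4 did not exist on the new node, kubeadm, kubectl and kubelet are copied over from the local cache; each transfer is preceded by a stat so binaries that are already present get skipped. A hypothetical sketch of that check-then-copy pattern using the ssh and scp CLIs is below; ensureRemoteBinary, the docker@192.168.39.89 target, and the permission handling are simplifying assumptions (minikube uses its own SSH runner and handles the root-owned destination itself).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path"
)

// ensureRemoteBinary copies a cached binary to the node only when it is not
// already present, mirroring the stat-then-scp pattern in the log.
func ensureRemoteBinary(sshTarget, localPath, remotePath string) error {
	// `stat` exits non-zero when the file is missing, just as in the log.
	if err := exec.Command("ssh", sshTarget, "stat", remotePath).Run(); err == nil {
		return nil // already provisioned
	}
	if err := exec.Command("ssh", sshTarget, "sudo", "mkdir", "-p", path.Dir(remotePath)).Run(); err != nil {
		return fmt.Errorf("creating %s: %w", path.Dir(remotePath), err)
	}
	// Note: the destination must be writable by the SSH user for plain scp
	// to succeed; this sketch ignores that detail.
	if out, err := exec.Command("scp", localPath, sshTarget+":"+remotePath).CombinedOutput(); err != nil {
		return fmt.Errorf("copying %s: %v: %s", localPath, err, out)
	}
	return nil
}

func main() {
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		local := "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/" + bin
		remote := "/var/lib/minikube/binaries/v1.28.4/" + bin
		if err := ensureRemoteBinary("docker@192.168.39.89", local, remote); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}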
	I0315 06:13:21.483097   25161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 06:13:21.494291   25161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 06:13:21.516613   25161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:13:21.536637   25161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:13:21.556286   25161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:13:21.561424   25161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:13:21.575899   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:21.711123   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:13:21.730533   25161 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:13:21.730862   25161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:13:21.730910   25161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:13:21.746267   25161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0315 06:13:21.746738   25161 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:13:21.747231   25161 main.go:141] libmachine: Using API Version  1
	I0315 06:13:21.747254   25161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:13:21.747637   25161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:13:21.747857   25161 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:13:21.748031   25161 start.go:316] joinCluster: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:13:21.748187   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 06:13:21.748212   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:13:21.751415   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:21.751947   25161 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:13:21.751973   25161 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:13:21.752155   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:13:21.752320   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:13:21.752515   25161 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:13:21.752676   25161 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:13:21.916601   25161 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:13:21.916650   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kr2r6t.3p96coeihyw3qpvz --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I0315 06:13:50.038289   25161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kr2r6t.3p96coeihyw3qpvz --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (28.121613009s)
	I0315 06:13:50.038330   25161 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 06:13:50.529373   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m03 minikube.k8s.io/updated_at=2024_03_15T06_13_50_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false
	I0315 06:13:50.675068   25161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-866665-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 06:13:50.783667   25161 start.go:318] duration metric: took 29.035633105s to joinCluster
	I0315 06:13:50.783744   25161 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:13:50.785272   25161 out.go:177] * Verifying Kubernetes components...
	I0315 06:13:50.784078   25161 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:13:50.786680   25161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:13:51.048820   25161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:13:51.065661   25161 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:13:51.065880   25161 kapi.go:59] client config for ha-866665: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 06:13:51.065935   25161 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.78:8443
	I0315 06:13:51.066136   25161 node_ready.go:35] waiting up to 6m0s for node "ha-866665-m03" to be "Ready" ...
	I0315 06:13:51.066208   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:51.066219   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:51.066230   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:51.066239   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:51.070343   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:51.567067   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:51.567092   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:51.567110   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:51.567115   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:51.571135   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:52.067200   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:52.067219   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:52.067227   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:52.067230   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:52.071116   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:52.567046   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:52.567068   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:52.567076   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:52.567080   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:52.571252   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:53.066954   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:53.066976   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:53.066986   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:53.066993   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:53.071221   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:53.072130   25161 node_ready.go:53] node "ha-866665-m03" has status "Ready":"False"
	I0315 06:13:53.566345   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:53.566373   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:53.566385   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:53.566392   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:53.571000   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:54.066700   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:54.066723   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:54.066731   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:54.066735   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:54.070373   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:54.566329   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:54.566354   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:54.566365   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:54.566371   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:54.571077   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:55.067093   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:55.067115   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:55.067123   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:55.067126   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:55.071034   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:55.567255   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:55.567278   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:55.567285   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:55.567290   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:55.570915   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:55.571668   25161 node_ready.go:53] node "ha-866665-m03" has status "Ready":"False"
	I0315 06:13:56.066954   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.066973   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.066981   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.066985   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.070691   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.071415   25161 node_ready.go:49] node "ha-866665-m03" has status "Ready":"True"
	I0315 06:13:56.071435   25161 node_ready.go:38] duration metric: took 5.005282027s for node "ha-866665-m03" to be "Ready" ...
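
Note: the roughly 500 ms GET loop against /api/v1/nodes/ha-866665-m03 above is the node_ready wait: the Node object is re-read until its Ready condition reports True, which here took about 5 s. A simplified sketch of that check (illustrative function name, not minikube's node_ready.go; assumes a clientset built as in the earlier sketch plus the context, fmt, time, k8s.io/api/core/v1 and k8s.io/apimachinery/pkg/apis/meta/v1 imports):

// waitNodeReady polls the named Node until its Ready condition is True
// or the timeout expires.
func waitNodeReady(ctx context.Context, clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		// The log above shows the harness re-querying roughly every 500 ms.
		time.Sleep(500 * time.Millisecond)
	}
}
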
	I0315 06:13:56.071444   25161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:13:56.071520   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:13:56.071532   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.071542   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.071554   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.078886   25161 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0315 06:13:56.085590   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.085671   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mgthb
	I0315 06:13:56.085680   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.085688   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.085693   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.089325   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.089998   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.090014   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.090021   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.090025   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.092988   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:13:56.093428   25161 pod_ready.go:92] pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.093444   25161 pod_ready.go:81] duration metric: took 7.831568ms for pod "coredns-5dd5756b68-mgthb" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.093453   25161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.093537   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r57px
	I0315 06:13:56.093551   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.093561   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.093568   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.096866   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.097525   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.097544   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.097555   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.097559   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.101060   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.101959   25161 pod_ready.go:92] pod "coredns-5dd5756b68-r57px" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.101978   25161 pod_ready.go:81] duration metric: took 8.51782ms for pod "coredns-5dd5756b68-r57px" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.101990   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.102051   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665
	I0315 06:13:56.102062   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.102072   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.102082   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.107567   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:56.108157   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:56.108173   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.108183   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.108187   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.112528   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:56.113299   25161 pod_ready.go:92] pod "etcd-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.113314   25161 pod_ready.go:81] duration metric: took 11.317379ms for pod "etcd-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.113324   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.113368   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m02
	I0315 06:13:56.113375   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.113383   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.113386   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.118160   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:56.119257   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:56.119272   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.119279   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.119282   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.122864   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.123414   25161 pod_ready.go:92] pod "etcd-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:56.123431   25161 pod_ready.go:81] duration metric: took 10.102076ms for pod "etcd-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.123440   25161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:56.267803   25161 request.go:629] Waited for 144.311021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.267873   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.267883   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.267891   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.267895   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.271386   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:56.467461   25161 request.go:629] Waited for 195.39417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.467526   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.467533   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.467541   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.467547   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.471981   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
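
Note: the "Waited for ... due to client-side throttling, not priority and fairness" entries that start here come from client-go's client-side rate limiter (with rest.Config QPS and Burst left at zero, client-go falls back to its defaults of 5 QPS with a burst of 10), not from server-side API Priority and Fairness. If the readiness loops needed more headroom, the limits could be raised before building the clientset; a small illustrative helper (name and values are mine, not minikube's; assumes the k8s.io/client-go/rest and k8s.io/client-go/kubernetes imports):

// newFasterClient widens the client-side rate limits so tight polling
// loops spend less time in the throttling waits seen in this log.
func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
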
	I0315 06:13:56.666996   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:56.667016   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.667030   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.667039   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.672207   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:56.867209   25161 request.go:629] Waited for 194.291173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.867300   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:56.867310   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:56.867317   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:56.867325   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:56.870748   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.123654   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:57.123676   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.123684   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.123688   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.127313   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.267584   25161 request.go:629] Waited for 139.352755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.267646   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.267664   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.267671   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.267675   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.271352   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:57.623926   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:57.623948   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.623957   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.623963   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.629129   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:13:57.667927   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:57.667958   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:57.667964   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:57.667968   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:57.671784   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.123940   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:58.123962   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.123970   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.123975   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.127633   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.128261   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:58.128275   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.128281   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.128284   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.131681   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.132582   25161 pod_ready.go:102] pod "etcd-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:13:58.623697   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/etcd-ha-866665-m03
	I0315 06:13:58.623719   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.623728   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.623732   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.627712   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.628448   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:58.628480   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.628492   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.628499   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.631686   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.632296   25161 pod_ready.go:92] pod "etcd-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:58.632314   25161 pod_ready.go:81] duration metric: took 2.508868218s for pod "etcd-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.632330   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.667659   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665
	I0315 06:13:58.667681   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.667689   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.667695   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.671600   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:58.867965   25161 request.go:629] Waited for 195.346208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:58.868025   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:13:58.868031   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:58.868039   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:58.868044   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:58.872066   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:58.872619   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:58.872636   25161 pod_ready.go:81] duration metric: took 240.300208ms for pod "kube-apiserver-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:58.872645   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.066992   25161 request.go:629] Waited for 194.282943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:13:59.067065   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m02
	I0315 06:13:59.067077   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.067086   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.067095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.070872   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:59.267994   25161 request.go:629] Waited for 196.368377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:59.268061   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:13:59.268071   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.268084   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.268094   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.272096   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:13:59.272687   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:13:59.272712   25161 pod_ready.go:81] duration metric: took 400.060283ms for pod "kube-apiserver-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.272727   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:13:59.467838   25161 request.go:629] Waited for 195.03102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.467911   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.467917   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.467925   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.467930   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.472237   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:59.667365   25161 request.go:629] Waited for 194.371732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:59.667427   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:13:59.667435   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.667448   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.667454   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.671634   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:13:59.867424   25161 request.go:629] Waited for 94.276848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.867493   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:13:59.867500   25161 round_trippers.go:469] Request Headers:
	I0315 06:13:59.867510   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:13:59.867516   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:13:59.871467   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.067802   25161 request.go:629] Waited for 195.400399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.067897   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.067916   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.067926   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.067932   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.071709   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.273311   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:00.273335   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.273344   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.273348   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.278307   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:00.467629   25161 request.go:629] Waited for 188.376209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.467685   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.467691   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.467701   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.467711   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.471740   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:00.773689   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:00.773711   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.773719   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.773722   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.777511   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:00.867555   25161 request.go:629] Waited for 89.227235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.867628   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:00.867634   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:00.867641   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:00.867645   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:00.871502   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.273450   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:01.273477   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.273503   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.273510   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.277314   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.278175   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:01.278193   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.278203   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.278209   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.281480   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:01.282027   25161 pod_ready.go:102] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:14:01.773594   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:01.773616   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.773623   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.773627   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.777711   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:01.778569   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:01.778604   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:01.778614   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:01.778623   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:01.781948   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.273959   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:02.273985   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.273993   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.273998   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.277909   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.279046   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:02.279064   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.279071   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.279075   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.282065   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:14:02.773586   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:02.773607   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.773622   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.773628   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.777171   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:02.777964   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:02.777977   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:02.777984   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:02.777988   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:02.781353   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:03.273793   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:03.273816   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.273825   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.273829   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.278546   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.279438   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:03.279456   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.279467   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.279472   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.283715   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.284165   25161 pod_ready.go:102] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 06:14:03.773330   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:03.773356   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.773367   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.773373   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.777562   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:03.778583   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:03.778604   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:03.778615   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:03.778622   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:03.782127   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.273595   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665-m03
	I0315 06:14:04.273618   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.273627   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.273632   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.277682   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:04.278477   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.278510   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.278522   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.278528   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.283682   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:04.284233   25161 pod_ready.go:92] pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.284252   25161 pod_ready.go:81] duration metric: took 5.011513967s for pod "kube-apiserver-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.284261   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.284314   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665
	I0315 06:14:04.284322   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.284329   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.284333   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.287542   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.288016   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:04.288031   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.288038   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.288041   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.291184   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.291801   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.291822   25161 pod_ready.go:81] duration metric: took 7.55545ms for pod "kube-controller-manager-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.291833   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.291882   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m02
	I0315 06:14:04.291889   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.291895   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.291904   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.294962   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.467644   25161 request.go:629] Waited for 171.948514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:04.467696   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:04.467702   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.467717   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.467721   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.472005   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:04.472461   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:04.472507   25161 pod_ready.go:81] duration metric: took 180.666536ms for pod "kube-controller-manager-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.472518   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:04.667987   25161 request.go:629] Waited for 195.400575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:04.668039   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:04.668045   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.668055   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.668059   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.671954   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:04.868032   25161 request.go:629] Waited for 195.436161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.868127   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:04.868135   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:04.868147   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:04.868155   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:04.872533   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.067526   25161 request.go:629] Waited for 94.337643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:05.067591   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-866665-m03
	I0315 06:14:05.067597   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.067608   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.067613   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.071591   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:05.267695   25161 request.go:629] Waited for 195.366025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.267748   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.267759   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.267768   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.267774   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.272158   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.273061   25161 pod_ready.go:92] pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:05.273086   25161 pod_ready.go:81] duration metric: took 800.560339ms for pod "kube-controller-manager-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.273100   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wxfg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.467586   25161 request.go:629] Waited for 194.422691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wxfg
	I0315 06:14:05.467681   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6wxfg
	I0315 06:14:05.467694   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.467705   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.467717   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.471891   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:05.667940   25161 request.go:629] Waited for 195.377355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.668005   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:05.668011   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.668018   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.668024   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.674197   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:14:05.674739   25161 pod_ready.go:92] pod "kube-proxy-6wxfg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:05.674757   25161 pod_ready.go:81] duration metric: took 401.647952ms for pod "kube-proxy-6wxfg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.674769   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:05.868045   25161 request.go:629] Waited for 193.209712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:14:05.868130   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lqzk8
	I0315 06:14:05.868135   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:05.868142   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:05.868147   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:05.878231   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:14:06.067405   25161 request.go:629] Waited for 187.322806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:06.067484   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:06.067490   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.067497   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.067501   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.071957   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:06.072441   25161 pod_ready.go:92] pod "kube-proxy-lqzk8" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.072482   25161 pod_ready.go:81] duration metric: took 397.687128ms for pod "kube-proxy-lqzk8" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.072497   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.267564   25161 request.go:629] Waited for 194.989792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:14:06.267625   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sbxgg
	I0315 06:14:06.267630   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.267637   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.267642   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.271381   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:06.467810   25161 request.go:629] Waited for 195.461072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.467911   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.467925   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.467935   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.467943   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.471989   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:06.472812   25161 pod_ready.go:92] pod "kube-proxy-sbxgg" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.472831   25161 pod_ready.go:81] duration metric: took 400.326596ms for pod "kube-proxy-sbxgg" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.472843   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.666996   25161 request.go:629] Waited for 194.085115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:14:06.667074   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665
	I0315 06:14:06.667079   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.667087   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.667094   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.671048   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:06.866969   25161 request.go:629] Waited for 195.186475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.867065   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665
	I0315 06:14:06.867087   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:06.867095   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:06.867106   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:06.873323   25161 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 06:14:06.873883   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:06.873905   25161 pod_ready.go:81] duration metric: took 401.054482ms for pod "kube-scheduler-ha-866665" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:06.873915   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.067349   25161 request.go:629] Waited for 193.371689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:14:07.067423   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m02
	I0315 06:14:07.067430   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.067440   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.067447   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.071395   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:07.267670   25161 request.go:629] Waited for 195.463984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:07.267734   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02
	I0315 06:14:07.267741   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.267750   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.267757   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.271416   25161 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 06:14:07.272074   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:07.272093   25161 pod_ready.go:81] duration metric: took 398.171188ms for pod "kube-scheduler-ha-866665-m02" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.272105   25161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.467215   25161 request.go:629] Waited for 195.044748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m03
	I0315 06:14:07.467288   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-866665-m03
	I0315 06:14:07.467294   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.467302   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.467306   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.472949   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:07.666968   25161 request.go:629] Waited for 193.372356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:07.667064   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03
	I0315 06:14:07.667081   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.667091   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.667100   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.677989   25161 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 06:14:07.678508   25161 pod_ready.go:92] pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 06:14:07.678529   25161 pod_ready.go:81] duration metric: took 406.417977ms for pod "kube-scheduler-ha-866665-m03" in "kube-system" namespace to be "Ready" ...
	I0315 06:14:07.678541   25161 pod_ready.go:38] duration metric: took 11.60708612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
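
Note: the pod_ready phase that just finished applies the same pattern per pod: fetch each system-critical Pod, check its Ready condition, confirm the owning node, and retry on an interval until everything reports True (about 11.6 s in total here). The core per-pod test reduces to something like the sketch below (illustrative name, not minikube's pod_ready.go; same clientset and imports as the node sketch above):

// podIsReady reports whether the named Pod currently has a Ready
// condition with status True.
func podIsReady(ctx context.Context, clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
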
	I0315 06:14:07.678556   25161 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:14:07.678636   25161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:14:07.700935   25161 api_server.go:72] duration metric: took 16.917153632s to wait for apiserver process to appear ...
	I0315 06:14:07.700961   25161 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:14:07.700984   25161 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0315 06:14:07.711901   25161 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0315 06:14:07.711991   25161 round_trippers.go:463] GET https://192.168.39.78:8443/version
	I0315 06:14:07.711998   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.712007   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.712012   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.714787   25161 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 06:14:07.714937   25161 api_server.go:141] control plane version: v1.28.4
	I0315 06:14:07.714960   25161 api_server.go:131] duration metric: took 13.992544ms to wait for apiserver health ...
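
Note: after the readiness waits, the harness confirms the kube-apiserver process over SSH (pgrep) and then probes /healthz (expecting a 200 with body "ok") and /version, which reports v1.28.4. One way to issue the same probes from client-go, reusing the already-configured certificates via the discovery REST client (illustrative function, not minikube's api_server.go; same clientset and imports as above):

// apiserverHealthy checks /healthz and prints the reported server version.
func apiserverHealthy(ctx context.Context, clientset *kubernetes.Clientset) error {
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", body)
	}
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", info.GitVersion) // v1.28.4 in this run
	return nil
}
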
	I0315 06:14:07.714969   25161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:14:07.867219   25161 request.go:629] Waited for 152.185848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:07.867277   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:07.867282   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:07.867289   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:07.867293   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:07.876492   25161 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 06:14:07.883095   25161 system_pods.go:59] 24 kube-system pods found
	I0315 06:14:07.883126   25161 system_pods.go:61] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:14:07.883132   25161 system_pods.go:61] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:14:07.883136   25161 system_pods.go:61] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:14:07.883141   25161 system_pods.go:61] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:14:07.883145   25161 system_pods.go:61] "etcd-ha-866665-m03" [20f9ca29-a258-454a-a497-22ad15f35c6d] Running
	I0315 06:14:07.883148   25161 system_pods.go:61] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:14:07.883151   25161 system_pods.go:61] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:14:07.883153   25161 system_pods.go:61] "kindnet-qr9qm" [bd816497-5a8b-4028-9fa5-d4f5739b651e] Running
	I0315 06:14:07.883156   25161 system_pods.go:61] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:14:07.883159   25161 system_pods.go:61] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:14:07.883162   25161 system_pods.go:61] "kube-apiserver-ha-866665-m03" [03abb17f-377c-422b-9e2a-2c837bafa855] Running
	I0315 06:14:07.883165   25161 system_pods.go:61] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:14:07.883168   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:14:07.883171   25161 system_pods.go:61] "kube-controller-manager-ha-866665-m03" [e09a088d-2fd3-4abb-a4d6-796ec9a94544] Running
	I0315 06:14:07.883173   25161 system_pods.go:61] "kube-proxy-6wxfg" [ee19b698-ba60-4edb-bb37-d9ca6a1793b2] Running
	I0315 06:14:07.883176   25161 system_pods.go:61] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:14:07.883178   25161 system_pods.go:61] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:14:07.883182   25161 system_pods.go:61] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:14:07.883185   25161 system_pods.go:61] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:14:07.883189   25161 system_pods.go:61] "kube-scheduler-ha-866665-m03" [9e7712b2-d794-4544-9044-6a5acf281303] Running
	I0315 06:14:07.883191   25161 system_pods.go:61] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:14:07.883195   25161 system_pods.go:61] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:14:07.883197   25161 system_pods.go:61] "kube-vip-ha-866665-m03" [73e7ac10-6df8-440e-98af-b3724499b73e] Running
	I0315 06:14:07.883200   25161 system_pods.go:61] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:14:07.883206   25161 system_pods.go:74] duration metric: took 168.231276ms to wait for pod list to return data ...
	I0315 06:14:07.883213   25161 default_sa.go:34] waiting for default service account to be created ...
	I0315 06:14:08.067727   25161 request.go:629] Waited for 184.450892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:14:08.067890   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/default/serviceaccounts
	I0315 06:14:08.067908   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.067915   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.067920   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.072178   25161 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 06:14:08.072304   25161 default_sa.go:45] found service account: "default"
	I0315 06:14:08.072323   25161 default_sa.go:55] duration metric: took 189.104157ms for default service account to be created ...
	I0315 06:14:08.072337   25161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 06:14:08.267770   25161 request.go:629] Waited for 195.367442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:08.267840   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/namespaces/kube-system/pods
	I0315 06:14:08.267846   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.267853   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.267857   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.275938   25161 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0315 06:14:08.282381   25161 system_pods.go:86] 24 kube-system pods found
	I0315 06:14:08.282413   25161 system_pods.go:89] "coredns-5dd5756b68-mgthb" [6498160a-372c-4273-a82c-b06c4b7b239b] Running
	I0315 06:14:08.282419   25161 system_pods.go:89] "coredns-5dd5756b68-r57px" [4d7f8905-5519-453d-a9a3-26b5d511f1c3] Running
	I0315 06:14:08.282424   25161 system_pods.go:89] "etcd-ha-866665" [61baaf38-7138-4f5e-a83d-b16289cb416f] Running
	I0315 06:14:08.282429   25161 system_pods.go:89] "etcd-ha-866665-m02" [3de4d2ac-4b5a-4be7-b0c3-52ed10e657e0] Running
	I0315 06:14:08.282434   25161 system_pods.go:89] "etcd-ha-866665-m03" [20f9ca29-a258-454a-a497-22ad15f35c6d] Running
	I0315 06:14:08.282438   25161 system_pods.go:89] "kindnet-26vqf" [f3ea845a-1e9e-447b-a364-d35d44770c6d] Running
	I0315 06:14:08.282442   25161 system_pods.go:89] "kindnet-9nvvx" [4c5333df-bb98-4f27-9197-875a160f4ff6] Running
	I0315 06:14:08.282445   25161 system_pods.go:89] "kindnet-qr9qm" [bd816497-5a8b-4028-9fa5-d4f5739b651e] Running
	I0315 06:14:08.282449   25161 system_pods.go:89] "kube-apiserver-ha-866665" [b359ee3f-d7f4-4545-9dd9-579be1206407] Running
	I0315 06:14:08.282453   25161 system_pods.go:89] "kube-apiserver-ha-866665-m02" [e0a082e3-bd71-4857-b03c-6b02123c2c10] Running
	I0315 06:14:08.282457   25161 system_pods.go:89] "kube-apiserver-ha-866665-m03" [03abb17f-377c-422b-9e2a-2c837bafa855] Running
	I0315 06:14:08.282461   25161 system_pods.go:89] "kube-controller-manager-ha-866665" [87edfd17-4191-433d-adc5-9368128a0d19] Running
	I0315 06:14:08.282464   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m02" [54b66465-5f6b-4ab9-9f4d-912cfa1cb0e7] Running
	I0315 06:14:08.282468   25161 system_pods.go:89] "kube-controller-manager-ha-866665-m03" [e09a088d-2fd3-4abb-a4d6-796ec9a94544] Running
	I0315 06:14:08.282472   25161 system_pods.go:89] "kube-proxy-6wxfg" [ee19b698-ba60-4edb-bb37-d9ca6a1793b2] Running
	I0315 06:14:08.282475   25161 system_pods.go:89] "kube-proxy-lqzk8" [2633ba07-71f5-4944-9f74-df1beb37377b] Running
	I0315 06:14:08.282479   25161 system_pods.go:89] "kube-proxy-sbxgg" [33fac82d-5f3a-42b8-99b7-1f4ee45c0f98] Running
	I0315 06:14:08.282482   25161 system_pods.go:89] "kube-scheduler-ha-866665" [5da1b14d-1413-4f62-9b44-45091c5d4284] Running
	I0315 06:14:08.282485   25161 system_pods.go:89] "kube-scheduler-ha-866665-m02" [66ba5c33-d34c-49af-a22a-412db91f6a60] Running
	I0315 06:14:08.282489   25161 system_pods.go:89] "kube-scheduler-ha-866665-m03" [9e7712b2-d794-4544-9044-6a5acf281303] Running
	I0315 06:14:08.282493   25161 system_pods.go:89] "kube-vip-ha-866665" [d3470d6f-a8eb-4694-b4b5-ca25415a6ce1] Running
	I0315 06:14:08.282496   25161 system_pods.go:89] "kube-vip-ha-866665-m02" [3b7cb7de-7f7b-4a4c-b328-78ec9e7c0a43] Running
	I0315 06:14:08.282500   25161 system_pods.go:89] "kube-vip-ha-866665-m03" [73e7ac10-6df8-440e-98af-b3724499b73e] Running
	I0315 06:14:08.282503   25161 system_pods.go:89] "storage-provisioner" [b11128b3-f84e-4526-992d-56e278c3f7c9] Running
	I0315 06:14:08.282510   25161 system_pods.go:126] duration metric: took 210.167958ms to wait for k8s-apps to be running ...
	I0315 06:14:08.282517   25161 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 06:14:08.282563   25161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:14:08.302719   25161 system_svc.go:56] duration metric: took 20.192329ms WaitForService to wait for kubelet
	I0315 06:14:08.302752   25161 kubeadm.go:576] duration metric: took 17.518975971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:14:08.302777   25161 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:14:08.467146   25161 request.go:629] Waited for 164.280557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.78:8443/api/v1/nodes
	I0315 06:14:08.467202   25161 round_trippers.go:463] GET https://192.168.39.78:8443/api/v1/nodes
	I0315 06:14:08.467208   25161 round_trippers.go:469] Request Headers:
	I0315 06:14:08.467215   25161 round_trippers.go:473]     Accept: application/json, */*
	I0315 06:14:08.467218   25161 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 06:14:08.472514   25161 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 06:14:08.473633   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473655   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473665   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473668   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473671   25161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:14:08.473675   25161 node_conditions.go:123] node cpu capacity is 2
	I0315 06:14:08.473678   25161 node_conditions.go:105] duration metric: took 170.896148ms to run NodePressure ...
	I0315 06:14:08.473689   25161 start.go:240] waiting for startup goroutines ...
	I0315 06:14:08.473708   25161 start.go:254] writing updated cluster config ...
	I0315 06:14:08.474060   25161 ssh_runner.go:195] Run: rm -f paused
	I0315 06:14:08.528488   25161 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 06:14:08.530950   25161 out.go:177] * Done! kubectl is now configured to use "ha-866665" cluster and "default" namespace by default
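
The start log above finishes by gating on a chain of readiness checks: the kube-apiserver process (pgrep), the apiserver /healthz endpoint, the kube-system pod list, the default service account, the kubelet service, and finally the NodePressure conditions for each node. As a rough, standalone illustration of the healthz step only (not minikube's actual implementation), the Go sketch below polls an apiserver /healthz URL until it returns 200 or a deadline expires; the URL, the retry interval, and the skipped TLS verification are assumptions made purely for the example.

	// Illustrative sketch only: poll an apiserver /healthz endpoint until it
	// returns 200 or the context deadline expires, mirroring the
	// "waiting for apiserver healthz status ..." step logged above.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		// Assumption: the test apiserver presents a self-signed certificate,
		// so verification is skipped here purely for illustration.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
				// retry after a short pause
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://192.168.39.78:8443/healthz"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}
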
	
	
	==> CRI-O <==
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.081131406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483524081110049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4011d0fe-cb86-4826-b3ed-8d08b002d553 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.082005868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3d4bb0e-327f-4c33-8c8e-88fbc566af3c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.082059915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3d4bb0e-327f-4c33-8c8e-88fbc566af3c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.082385130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3d4bb0e-327f-4c33-8c8e-88fbc566af3c name=/runtime.v1.RuntimeService/ListContainers
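
Each CRI-O journal entry above is a Request/Response pair tagged with the gRPC method in its name= field (ImageFsInfo, Version, ListContainers), and the long ListContainers responses dominate the rest of this dump. As an assumed, standalone way to summarize such a log (it is not part of the test suite), the sketch below tallies entries per RPC name from a saved copy of the log; the file name and the regular expression are assumptions made for the example.

	// Illustrative sketch only: count CRI RPCs in a saved CRI-O debug log,
	// keyed by the name= field (e.g. /runtime.v1.RuntimeService/ListContainers).
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		nameRE := regexp.MustCompile(`name=(\S+)`)
		counts := map[string]int{}

		f, err := os.Open("crio.log") // assumed path to a saved copy of the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the scanner limit: ListContainers response lines are very long.
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			if m := nameRE.FindStringSubmatch(sc.Text()); m != nil {
				counts[m[1]]++
			}
		}
		for rpc, n := range counts {
			fmt.Printf("%6d  %s\n", n, rpc)
		}
	}
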
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.119845169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19133bce-70b2-4f10-945b-219d2e63756d name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.119947400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19133bce-70b2-4f10-945b-219d2e63756d name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.121030338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d33c0163-c6ad-4fff-842c-4ed6b0307c0f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.121752033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483524121724670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d33c0163-c6ad-4fff-842c-4ed6b0307c0f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.122421628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6089ab8e-cf8b-4349-8d8d-405dea8e3cf6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.122475256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6089ab8e-cf8b-4349-8d8d-405dea8e3cf6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.122742748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6089ab8e-cf8b-4349-8d8d-405dea8e3cf6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.164276670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1c0a166-25a2-40a4-9f33-dac09356a692 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.164396065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1c0a166-25a2-40a4-9f33-dac09356a692 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.168446060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f262bb3b-5b7a-4827-86e7-b66b4482bb7a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.168861973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483524168840417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f262bb3b-5b7a-4827-86e7-b66b4482bb7a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.169555980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2e69b94-10fd-494b-9a73-b09c16938220 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.169610030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2e69b94-10fd-494b-9a73-b09c16938220 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.169849963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2e69b94-10fd-494b-9a73-b09c16938220 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.211058295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7c5d105-fd82-41e3-9d1d-1c6616bc36b1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.211133849Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7c5d105-fd82-41e3-9d1d-1c6616bc36b1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.212797506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81d0407c-2e54-4142-8d0a-1877e03c2f67 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.213297259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483524213198738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81d0407c-2e54-4142-8d0a-1877e03c2f67 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.213820558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cdf6c84-e634-4322-8bfe-60d4a0aacb54 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.213875057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cdf6c84-e634-4322-8bfe-60d4a0aacb54 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:18:44 ha-866665 crio[677]: time="2024-03-15 06:18:44.214133126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483253186905309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483152994027029,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483153004587866,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083739558319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483083764163164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotation
s:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4339afd096a8f6dc10db520c48d9023653c8863ac73af90612dd8ee31afcf5,PodSandboxId:c25a805c385731c951bfc1bcb27126900e70649835c4a1c51259350eb9f5fc72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483083685068952,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3,PodSandboxId:cb12b8ff5eaf3b545292f6ebc6af9770522b4ca0c7a47079c93559019848a634,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483081587136624,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483078219835245,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90e2aa6abce93f5131dfee441973b15f8e02417beacab47b9fd7deee5f0b123,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483061812739479,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483058653594234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551,PodSandboxId:cb596bb7a70bf6856641a5cb8932d170d22515cce509dd2b38849adf5306095f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483058611597247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483058631943866,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323,PodSandboxId:209132c5db247631f8eb5afb5d8075310aceff8818b605143523c977a1105d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483058546910054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cdf6c84-e634-4322-8bfe-60d4a0aacb54 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3893d7b08f562       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4b1a833979698       busybox-5b5d89c9d6-82knb
	21104767a9371       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   c25a805c38573       storage-provisioner
	652c2ee94f6f3       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   2095201e88b51       kube-vip-ha-866665
	bede6c7f8912b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   89474c2214060       coredns-5dd5756b68-r57px
	c0ecd2e858892       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   72c22c098aee5       coredns-5dd5756b68-mgthb
	2a4339afd096a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   c25a805c38573       storage-provisioner
	7b60508bed4fc       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   cb12b8ff5eaf3       kindnet-9nvvx
	c07640cff4ced       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   e15b87fb1896f       kube-proxy-sbxgg
	a90e2aa6abce9       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   2095201e88b51       kube-vip-ha-866665
	7fcd79ed43f7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   97bf2aa8738ce       kube-scheduler-ha-866665
	adc8145247000       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   682c38a8f4263       etcd-ha-866665
	b639b306bcc41       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   cb596bb7a70bf       kube-apiserver-ha-866665
	dddbd40f934ba       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   209132c5db247       kube-controller-manager-ha-866665
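	
	For reference, a comparable listing can be pulled straight from the node with crictl; this is only a sketch, assuming SSH access to the primary VM of the ha-866665 profile (profile name taken from the node names above):
	
	  $ minikube -p ha-866665 ssh
	  $ sudo crictl ps -a    # lists running and exited containers, like the table above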
	
	
	==> coredns [bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780] <==
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53874 - 31945 "HINFO IN 7631167108013983909.4597778027584677041. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009476454s
	[INFO] 10.244.0.4:38164 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009591847s
	[INFO] 10.244.1.2:58652 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000766589s
	[INFO] 10.244.1.2:51069 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001862794s
	[INFO] 10.244.0.4:39512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00055199s
	[INFO] 10.244.0.4:46188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133209s
	[INFO] 10.244.0.4:45008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008468s
	[INFO] 10.244.0.4:37076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097079s
	[INFO] 10.244.1.2:45388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815413s
	[INFO] 10.244.1.2:40983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165928s
	[INFO] 10.244.1.2:41822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199064s
	[INFO] 10.244.1.2:51003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093469s
	[INFO] 10.244.2.2:52723 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155039s
	[INFO] 10.244.2.2:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105876s
	[INFO] 10.244.2.2:40110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118647s
	[INFO] 10.244.1.2:48735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190723s
	[INFO] 10.244.1.2:59420 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115761s
	[INFO] 10.244.1.2:44465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090898s
	[INFO] 10.244.2.2:55054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145748s
	[INFO] 10.244.2.2:48352 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081059s
	[INFO] 10.244.0.4:53797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115756s
	[INFO] 10.244.0.4:52841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114315s
	[INFO] 10.244.1.2:34071 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158733s
	[INFO] 10.244.2.2:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239839s
	
	
	==> coredns [c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90] <==
	[INFO] 10.244.0.4:57992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002676087s
	[INFO] 10.244.1.2:60882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021158s
	[INFO] 10.244.1.2:57314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002029124s
	[INFO] 10.244.1.2:55031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271586s
	[INFO] 10.244.1.2:33215 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203658s
	[INFO] 10.244.2.2:48404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148272s
	[INFO] 10.244.2.2:45614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171944s
	[INFO] 10.244.2.2:42730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	[INFO] 10.244.2.2:38361 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605049s
	[INFO] 10.244.2.2:54334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:51787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138576s
	[INFO] 10.244.0.4:35351 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081934s
	[INFO] 10.244.0.4:56185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140731s
	[INFO] 10.244.0.4:49966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062146s
	[INFO] 10.244.1.2:35089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123543s
	[INFO] 10.244.2.2:59029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184488s
	[INFO] 10.244.2.2:57369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103045s
	[INFO] 10.244.0.4:37219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243853s
	[INFO] 10.244.0.4:39054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129011s
	[INFO] 10.244.1.2:38863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321539s
	[INFO] 10.244.1.2:42772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125764s
	[INFO] 10.244.1.2:50426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114767s
	[INFO] 10.244.2.2:48400 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140476s
	[INFO] 10.244.2.2:47852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177728s
	[INFO] 10.244.2.2:44657 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185799s
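	
	Query log lines like the ones above can be reproduced from any pod in the cluster; a minimal sketch, assuming the busybox pod from the container list is still running and that the kubectl context carries the profile name:
	
	  $ kubectl --context ha-866665 exec busybox-5b5d89c9d6-82knb -- nslookup kubernetes.default.svc.cluster.local
	  $ kubectl --context ha-866665 exec busybox-5b5d89c9d6-82knb -- nslookup host.minikube.internal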
	
	
	==> describe nodes <==
	Name:               ha-866665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:11:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:14:40 +0000   Fri, 15 Mar 2024 06:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    ha-866665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3eab3c085e414bb06a8b946d23d263
	  System UUID:                3e3eab3c-085e-414b-b06a-8b946d23d263
	  Boot ID:                    67c0c773-5540-4e63-8171-6ccf807dc545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-82knb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-5dd5756b68-mgthb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m27s
	  kube-system                 coredns-5dd5756b68-r57px             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m27s
	  kube-system                 etcd-ha-866665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m39s
	  kube-system                 kindnet-9nvvx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-apiserver-ha-866665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-controller-manager-ha-866665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-proxy-sbxgg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-ha-866665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-vip-ha-866665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m25s  kube-proxy       
	  Normal  Starting                 7m39s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m39s  kubelet          Node ha-866665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s  kubelet          Node ha-866665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s  kubelet          Node ha-866665 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal  NodeReady                7m21s  kubelet          Node ha-866665 status is now: NodeReady
	  Normal  RegisteredNode           5m55s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal  RegisteredNode           4m40s  node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	
	
	Name:               ha-866665-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:12:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:15:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 06:14:20 +0000   Fri, 15 Mar 2024 06:15:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-866665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 58bd1411345f4ad89979a7572186fe49
	  System UUID:                58bd1411-345f-4ad8-9979-a7572186fe49
	  Boot ID:                    8f53b4f7-489b-4d2a-a47e-7995a970d46a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sdxnc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-866665-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-26vqf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-866665-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-866665-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-lqzk8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-866665-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-866665-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m3s   kube-proxy       
	  Normal  RegisteredNode  6m23s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode  5m55s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  NodeNotReady    2m50s  node-controller  Node ha-866665-m02 status is now: NodeNotReady
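	
	The Unknown conditions and unreachable taints above indicate that the kubelet on ha-866665-m02 stopped reporting around 06:15 UTC. They can be confirmed directly with kubectl; a sketch, assuming the context is named after the profile:
	
	  $ kubectl --context ha-866665 get nodes
	  $ kubectl --context ha-866665 get node ha-866665-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'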
	
	
	Name:               ha-866665-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_13_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:13:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:14:17 +0000   Fri, 15 Mar 2024 06:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-866665-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 051bb833ce1b410da5218cd79b3897d3
	  System UUID:                051bb833-ce1b-410d-a521-8cd79b3897d3
	  Boot ID:                    2de5ba98-3540-4b1b-869e-455fabb0f5a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-xc5x4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-866665-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-qr9qm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m58s
	  kube-system                 kube-apiserver-ha-866665-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-ha-866665-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-6wxfg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-866665-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-866665-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m55s  kube-proxy       
	  Normal  RegisteredNode  4m58s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal  RegisteredNode  4m55s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	
	
	Name:               ha-866665-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_14_48_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:14:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:14:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-866665-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba1c60db84af4e62b4dd3481111e694e
	  System UUID:                ba1c60db-84af-4e62-b4dd-3481111e694e
	  Boot ID:                    0376ead4-1240-436a-b9a9-8b12bb4d45e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j2vlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-bq6md    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x5 over 3m58s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x5 over 3m58s)  kubelet          Node ha-866665-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x5 over 3m58s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node ha-866665-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar15 06:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053149] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.657842] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.630134] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.215570] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054962] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.193593] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.117038] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.245141] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.806127] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059748] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.159068] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.996795] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:11] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435] <==
	{"level":"warn","ts":"2024-03-15T06:18:44.462378Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"af74041eca695613","error":"Get \"https://192.168.39.27:2380/version\": dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:18:44.466673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.501304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.514544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.518789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.534323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.541364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.548604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.552834Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.558869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.566782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.567153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.573636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.580139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.583919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.587708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.595157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.601629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.607566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.616087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.622513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.62819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.635119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.640997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:18:44.666814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"af74041eca695613","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 06:18:44 up 8 min,  0 users,  load average: 0.41, 0.38, 0.21
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7b60508bed4fc4b048ea4e69453a24386168c7523f0e5745e560f05877d7a8f3] <==
	I0315 06:18:06.555990       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:18:16.578017       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:18:16.578043       1 main.go:227] handling current node
	I0315 06:18:16.578052       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:18:16.578056       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:18:16.578174       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:18:16.578179       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:18:16.578316       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:18:16.578324       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:18:26.597589       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:18:26.597632       1 main.go:227] handling current node
	I0315 06:18:26.597644       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:18:26.597651       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:18:26.597774       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:18:26.597803       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:18:26.597864       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:18:26.597897       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:18:36.627860       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:18:36.628074       1 main.go:227] handling current node
	I0315 06:18:36.628087       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:18:36.628094       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:18:36.628402       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:18:36.628454       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:18:36.628560       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:18:36.628567       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551] <==
	I0315 06:12:36.017007       1 trace.go:236] Trace[2044672862]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6a85415a-27a0-4bcf-95ba-7853fcf32943,client:192.168.39.27,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:30.425) (total time: 5591ms):
	Trace[2044672862]: ["Create etcd3" audit-id:6a85415a-27a0-4bcf-95ba-7853fcf32943,key:/events/kube-system/kube-vip-ha-866665-m02.17bcdb5b4183cbe5,type:*core.Event,resource:events 5591ms (06:12:30.425)
	Trace[2044672862]:  ---"Txn call succeeded" 5591ms (06:12:36.016)]
	Trace[2044672862]: [5.591828817s] [5.591828817s] END
	I0315 06:12:36.020054       1 trace.go:236] Trace[1564094206]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f9dbe11d-a229-475c-86d6-bddbaa84ba10,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-vzzt5p77xnbzxty72rxwpkluua,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (15-Mar-2024 06:12:30.916) (total time: 5103ms):
	Trace[1564094206]: ["GuaranteedUpdate etcd3" audit-id:f9dbe11d-a229-475c-86d6-bddbaa84ba10,key:/leases/kube-system/apiserver-vzzt5p77xnbzxty72rxwpkluua,type:*coordination.Lease,resource:leases.coordination.k8s.io 5103ms (06:12:30.916)
	Trace[1564094206]:  ---"Txn call completed" 5102ms (06:12:36.019)]
	Trace[1564094206]: [5.10316225s] [5.10316225s] END
	I0315 06:12:36.021281       1 trace.go:236] Trace[517327827]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4ba017d7-8d46-473a-9b10-9c0c7c6551c9,client:192.168.39.78,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-866665-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (15-Mar-2024 06:12:31.993) (total time: 4027ms):
	Trace[517327827]: ["GuaranteedUpdate etcd3" audit-id:4ba017d7-8d46-473a-9b10-9c0c7c6551c9,key:/minions/ha-866665-m02,type:*core.Node,resource:nodes 4027ms (06:12:31.993)
	Trace[517327827]:  ---"Txn call completed" 4022ms (06:12:36.019)]
	Trace[517327827]: ---"About to apply patch" 4023ms (06:12:36.019)
	Trace[517327827]: [4.027362454s] [4.027362454s] END
	I0315 06:12:36.081062       1 trace.go:236] Trace[957907163]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d1fa324d-4ad7-43e8-a882-57dbc52cba26,client:192.168.39.27,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-866665-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (15-Mar-2024 06:12:31.791) (total time: 4288ms):
	Trace[957907163]: ["GuaranteedUpdate etcd3" audit-id:d1fa324d-4ad7-43e8-a882-57dbc52cba26,key:/minions/ha-866665-m02,type:*core.Node,resource:nodes 4282ms (06:12:31.798)
	Trace[957907163]:  ---"Txn call completed" 4217ms (06:12:36.017)
	Trace[957907163]:  ---"Txn call completed" 59ms (06:12:36.079)]
	Trace[957907163]: ---"About to apply patch" 4217ms (06:12:36.017)
	Trace[957907163]: ---"Object stored in database" 61ms (06:12:36.079)
	Trace[957907163]: [4.288548607s] [4.288548607s] END
	I0315 06:12:36.081649       1 trace.go:236] Trace[79545702]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ec3d2183-acde-4229-b714-66b115ad792f,client:192.168.39.27,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:31.060) (total time: 5021ms):
	Trace[79545702]: [5.021096301s] [5.021096301s] END
	I0315 06:12:36.084320       1 trace.go:236] Trace[186789167]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4835e2d2-3441-4e8f-8963-a052fe415079,client:192.168.39.27,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 06:12:30.058) (total time: 6025ms):
	Trace[186789167]: [6.025711189s] [6.025711189s] END
	W0315 06:15:25.721641       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.78 192.168.39.89]
	
	
	==> kube-controller-manager [dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323] <==
	I0315 06:14:10.059049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.382995ms"
	I0315 06:14:10.059756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="148.537µs"
	I0315 06:14:10.167830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="27.291048ms"
	I0315 06:14:10.168892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="167.027µs"
	I0315 06:14:13.455744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.11689ms"
	I0315 06:14:13.455900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.265µs"
	I0315 06:14:13.485160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.262346ms"
	I0315 06:14:13.485357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.298µs"
	I0315 06:14:13.577372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.001517ms"
	I0315 06:14:13.577700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.583µs"
	E0315 06:14:46.312438       1 certificate_controller.go:146] Sync csr-dzhd4 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dzhd4": the object has been modified; please apply your changes to the latest version and try again
	I0315 06:14:47.806996       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-866665-m04\" does not exist"
	I0315 06:14:47.835105       1 range_allocator.go:380] "Set node PodCIDR" node="ha-866665-m04" podCIDRs=["10.244.3.0/24"]
	I0315 06:14:47.859740       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j2vlf"
	I0315 06:14:47.859799       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bq6md"
	I0315 06:14:47.959585       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-626tb"
	I0315 06:14:47.974350       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-cx2hs"
	I0315 06:14:48.070328       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qhf9w"
	I0315 06:14:48.093918       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hhpn4"
	I0315 06:14:51.961094       1 event.go:307] "Event occurred" object="ha-866665-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller"
	I0315 06:14:51.992940       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665-m04"
	I0315 06:14:57.303712       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-866665-m04"
	I0315 06:15:54.074306       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-866665-m04"
	I0315 06:15:54.221006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.676896ms"
	I0315 06:15:54.221352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="196.406µs"
	
	
	==> kube-proxy [c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0] <==
	I0315 06:11:18.764572       1 server_others.go:69] "Using iptables proxy"
	I0315 06:11:18.841281       1 node.go:141] Successfully retrieved node IP: 192.168.39.78
	I0315 06:11:18.911950       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:11:18.912019       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:11:18.915281       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:11:18.915456       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:11:18.916163       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:11:18.916369       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:11:18.919735       1 config.go:188] "Starting service config controller"
	I0315 06:11:18.923685       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:11:18.921381       1 config.go:315] "Starting node config controller"
	I0315 06:11:18.923886       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:11:18.923178       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:11:18.926044       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:11:19.024431       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:11:19.027043       1 shared_informer.go:318] Caches are synced for node config
	I0315 06:11:19.027170       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3] <==
	W0315 06:11:03.233416       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:11:03.233558       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:11:03.291505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:11:03.291601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:11:03.379668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:11:03.379771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:11:03.429320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:11:03.429369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:11:03.470464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:11:03.470509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:11:03.490574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 06:11:03.490720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 06:11:03.581373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:11:03.581558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0315 06:11:06.508617       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 06:13:46.942994       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qr9qm\": pod kindnet-qr9qm is already assigned to node \"ha-866665-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qr9qm" node="ha-866665-m03"
	E0315 06:13:46.943139       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod bd816497-5a8b-4028-9fa5-d4f5739b651e(kube-system/kindnet-qr9qm) wasn't assumed so cannot be forgotten"
	E0315 06:13:46.943288       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qr9qm\": pod kindnet-qr9qm is already assigned to node \"ha-866665-m03\"" pod="kube-system/kindnet-qr9qm"
	I0315 06:13:46.943361       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qr9qm" node="ha-866665-m03"
	E0315 06:13:47.029446       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gwtb2\": pod kindnet-gwtb2 is already assigned to node \"ha-866665-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gwtb2" node="ha-866665-m03"
	E0315 06:13:47.029544       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gwtb2\": pod kindnet-gwtb2 is already assigned to node \"ha-866665-m03\"" pod="kube-system/kindnet-gwtb2"
	E0315 06:14:09.662146       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sdxnc\": pod busybox-5b5d89c9d6-sdxnc is already assigned to node \"ha-866665-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-sdxnc" node="ha-866665-m02"
	E0315 06:14:09.663772       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 48cca13d-39b1-40db-9f6c-1bff9b794de9(default/busybox-5b5d89c9d6-sdxnc) wasn't assumed so cannot be forgotten"
	E0315 06:14:09.664034       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sdxnc\": pod busybox-5b5d89c9d6-sdxnc is already assigned to node \"ha-866665-m02\"" pod="default/busybox-5b5d89c9d6-sdxnc"
	I0315 06:14:09.664892       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-sdxnc" node="ha-866665-m02"
	
	
	==> kubelet <==
	Mar 15 06:14:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:14:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:14:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:14:09 ha-866665 kubelet[1369]: I0315 06:14:09.639426    1369 topology_manager.go:215] "Topology Admit Handler" podUID="c12d72ab-189f-4a4a-a7df-54e10184a9ac" podNamespace="default" podName="busybox-5b5d89c9d6-82knb"
	Mar 15 06:14:09 ha-866665 kubelet[1369]: I0315 06:14:09.809772    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkn2\" (UniqueName: \"kubernetes.io/projected/c12d72ab-189f-4a4a-a7df-54e10184a9ac-kube-api-access-dbkn2\") pod \"busybox-5b5d89c9d6-82knb\" (UID: \"c12d72ab-189f-4a4a-a7df-54e10184a9ac\") " pod="default/busybox-5b5d89c9d6-82knb"
	Mar 15 06:15:05 ha-866665 kubelet[1369]: E0315 06:15:05.569763    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:15:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:15:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:16:05 ha-866665 kubelet[1369]: E0315 06:16:05.567725    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:16:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:16:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:17:05 ha-866665 kubelet[1369]: E0315 06:17:05.567373    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:17:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:17:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:18:05 ha-866665 kubelet[1369]: E0315 06:18:05.569104    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:18:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:18:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:18:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:18:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:261: (dbg) Run:  kubectl --context ha-866665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-866665 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-866665 -v=7 --alsologtostderr
E0315 06:19:21.071778   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:19:48.755564   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:19:58.533000   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-866665 -v=7 --alsologtostderr: exit status 82 (2m2.059178461s)

                                                
                                                
-- stdout --
	* Stopping node "ha-866665-m04"  ...
	* Stopping node "ha-866665-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:18:46.170248   30924 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:18:46.170503   30924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:46.170513   30924 out.go:304] Setting ErrFile to fd 2...
	I0315 06:18:46.170518   30924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:18:46.170716   30924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:18:46.170973   30924 out.go:298] Setting JSON to false
	I0315 06:18:46.171064   30924 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:46.171430   30924 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:46.171515   30924 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:18:46.171701   30924 mustload.go:65] Loading cluster: ha-866665
	I0315 06:18:46.171828   30924 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:18:46.171857   30924 stop.go:39] StopHost: ha-866665-m04
	I0315 06:18:46.172227   30924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:46.172274   30924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:46.188372   30924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0315 06:18:46.188984   30924 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:46.189688   30924 main.go:141] libmachine: Using API Version  1
	I0315 06:18:46.189712   30924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:46.190082   30924 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:46.192695   30924 out.go:177] * Stopping node "ha-866665-m04"  ...
	I0315 06:18:46.194193   30924 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 06:18:46.194219   30924 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:18:46.194444   30924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 06:18:46.194469   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:18:46.197394   30924 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:46.197858   30924 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:14:33 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:18:46.197891   30924 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:18:46.198061   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:18:46.198258   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:18:46.198433   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:18:46.198580   30924 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:18:46.285133   30924 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 06:18:46.341166   30924 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 06:18:46.405404   30924 main.go:141] libmachine: Stopping "ha-866665-m04"...
	I0315 06:18:46.405439   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:46.407062   30924 main.go:141] libmachine: (ha-866665-m04) Calling .Stop
	I0315 06:18:46.410702   30924 main.go:141] libmachine: (ha-866665-m04) Waiting for machine to stop 0/120
	I0315 06:18:47.735617   30924 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:18:47.737379   30924 main.go:141] libmachine: Machine "ha-866665-m04" was stopped.
	I0315 06:18:47.737399   30924 stop.go:75] duration metric: took 1.543208201s to stop
	I0315 06:18:47.737441   30924 stop.go:39] StopHost: ha-866665-m03
	I0315 06:18:47.737812   30924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:18:47.737859   30924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:18:47.754747   30924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0315 06:18:47.755201   30924 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:18:47.755763   30924 main.go:141] libmachine: Using API Version  1
	I0315 06:18:47.755786   30924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:18:47.756101   30924 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:18:47.758251   30924 out.go:177] * Stopping node "ha-866665-m03"  ...
	I0315 06:18:47.759554   30924 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 06:18:47.759573   30924 main.go:141] libmachine: (ha-866665-m03) Calling .DriverName
	I0315 06:18:47.759796   30924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 06:18:47.759823   30924 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHHostname
	I0315 06:18:47.762925   30924 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:47.763407   30924 main.go:141] libmachine: (ha-866665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:48:bb", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:13:06 +0000 UTC Type:0 Mac:52:54:00:76:48:bb Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-866665-m03 Clientid:01:52:54:00:76:48:bb}
	I0315 06:18:47.763449   30924 main.go:141] libmachine: (ha-866665-m03) DBG | domain ha-866665-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:76:48:bb in network mk-ha-866665
	I0315 06:18:47.763674   30924 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHPort
	I0315 06:18:47.763838   30924 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHKeyPath
	I0315 06:18:47.764046   30924 main.go:141] libmachine: (ha-866665-m03) Calling .GetSSHUsername
	I0315 06:18:47.764210   30924 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m03/id_rsa Username:docker}
	I0315 06:18:47.855822   30924 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 06:18:47.909409   30924 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 06:18:47.965821   30924 main.go:141] libmachine: Stopping "ha-866665-m03"...
	I0315 06:18:47.965853   30924 main.go:141] libmachine: (ha-866665-m03) Calling .GetState
	I0315 06:18:47.967563   30924 main.go:141] libmachine: (ha-866665-m03) Calling .Stop
	I0315 06:18:47.971543   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 0/120
	I0315 06:18:48.973092   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 1/120
	I0315 06:18:49.974759   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 2/120
	I0315 06:18:50.976846   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 3/120
	I0315 06:18:51.978454   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 4/120
	I0315 06:18:52.980186   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 5/120
	I0315 06:18:53.981660   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 6/120
	I0315 06:18:54.982990   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 7/120
	I0315 06:18:55.984455   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 8/120
	I0315 06:18:56.985951   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 9/120
	I0315 06:18:57.988384   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 10/120
	I0315 06:18:58.989635   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 11/120
	I0315 06:18:59.991331   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 12/120
	I0315 06:19:00.992803   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 13/120
	I0315 06:19:01.995074   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 14/120
	I0315 06:19:02.996888   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 15/120
	I0315 06:19:03.999086   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 16/120
	I0315 06:19:05.000440   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 17/120
	I0315 06:19:06.003204   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 18/120
	I0315 06:19:07.004645   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 19/120
	I0315 06:19:08.006584   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 20/120
	I0315 06:19:09.008266   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 21/120
	I0315 06:19:10.009714   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 22/120
	I0315 06:19:11.011175   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 23/120
	I0315 06:19:12.013457   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 24/120
	I0315 06:19:13.015722   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 25/120
	I0315 06:19:14.017149   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 26/120
	I0315 06:19:15.018510   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 27/120
	I0315 06:19:16.019731   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 28/120
	I0315 06:19:17.021298   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 29/120
	I0315 06:19:18.023308   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 30/120
	I0315 06:19:19.024551   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 31/120
	I0315 06:19:20.025939   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 32/120
	I0315 06:19:21.028329   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 33/120
	I0315 06:19:22.029815   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 34/120
	I0315 06:19:23.031925   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 35/120
	I0315 06:19:24.033274   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 36/120
	I0315 06:19:25.034912   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 37/120
	I0315 06:19:26.036209   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 38/120
	I0315 06:19:27.037860   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 39/120
	I0315 06:19:28.039599   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 40/120
	I0315 06:19:29.041109   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 41/120
	I0315 06:19:30.042818   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 42/120
	I0315 06:19:31.044176   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 43/120
	I0315 06:19:32.045619   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 44/120
	I0315 06:19:33.047574   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 45/120
	I0315 06:19:34.049165   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 46/120
	I0315 06:19:35.050424   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 47/120
	I0315 06:19:36.052345   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 48/120
	I0315 06:19:37.053927   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 49/120
	I0315 06:19:38.055755   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 50/120
	I0315 06:19:39.057421   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 51/120
	I0315 06:19:40.058794   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 52/120
	I0315 06:19:41.060189   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 53/120
	I0315 06:19:42.061641   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 54/120
	I0315 06:19:43.063504   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 55/120
	I0315 06:19:44.065652   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 56/120
	I0315 06:19:45.067027   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 57/120
	I0315 06:19:46.068572   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 58/120
	I0315 06:19:47.069878   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 59/120
	I0315 06:19:48.071548   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 60/120
	I0315 06:19:49.073066   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 61/120
	I0315 06:19:50.074522   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 62/120
	I0315 06:19:51.075869   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 63/120
	I0315 06:19:52.077356   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 64/120
	I0315 06:19:53.079208   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 65/120
	I0315 06:19:54.080659   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 66/120
	I0315 06:19:55.081954   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 67/120
	I0315 06:19:56.083257   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 68/120
	I0315 06:19:57.084536   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 69/120
	I0315 06:19:58.086228   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 70/120
	I0315 06:19:59.087645   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 71/120
	I0315 06:20:00.089097   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 72/120
	I0315 06:20:01.090391   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 73/120
	I0315 06:20:02.092725   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 74/120
	I0315 06:20:03.094709   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 75/120
	I0315 06:20:04.096267   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 76/120
	I0315 06:20:05.097804   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 77/120
	I0315 06:20:06.099185   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 78/120
	I0315 06:20:07.100668   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 79/120
	I0315 06:20:08.102548   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 80/120
	I0315 06:20:09.104012   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 81/120
	I0315 06:20:10.105335   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 82/120
	I0315 06:20:11.106932   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 83/120
	I0315 06:20:12.108662   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 84/120
	I0315 06:20:13.110430   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 85/120
	I0315 06:20:14.111801   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 86/120
	I0315 06:20:15.113297   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 87/120
	I0315 06:20:16.114584   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 88/120
	I0315 06:20:17.115963   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 89/120
	I0315 06:20:18.117693   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 90/120
	I0315 06:20:19.119090   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 91/120
	I0315 06:20:20.120597   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 92/120
	I0315 06:20:21.122305   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 93/120
	I0315 06:20:22.123769   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 94/120
	I0315 06:20:23.125866   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 95/120
	I0315 06:20:24.127879   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 96/120
	I0315 06:20:25.129765   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 97/120
	I0315 06:20:26.131276   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 98/120
	I0315 06:20:27.132755   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 99/120
	I0315 06:20:28.134110   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 100/120
	I0315 06:20:29.135503   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 101/120
	I0315 06:20:30.136953   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 102/120
	I0315 06:20:31.138654   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 103/120
	I0315 06:20:32.140311   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 104/120
	I0315 06:20:33.142447   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 105/120
	I0315 06:20:34.143999   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 106/120
	I0315 06:20:35.145535   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 107/120
	I0315 06:20:36.147013   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 108/120
	I0315 06:20:37.148499   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 109/120
	I0315 06:20:38.150232   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 110/120
	I0315 06:20:39.151858   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 111/120
	I0315 06:20:40.153509   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 112/120
	I0315 06:20:41.155166   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 113/120
	I0315 06:20:42.156697   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 114/120
	I0315 06:20:43.158366   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 115/120
	I0315 06:20:44.159917   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 116/120
	I0315 06:20:45.162050   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 117/120
	I0315 06:20:46.164068   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 118/120
	I0315 06:20:47.165075   30924 main.go:141] libmachine: (ha-866665-m03) Waiting for machine to stop 119/120
	I0315 06:20:48.166069   30924 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 06:20:48.166135   30924 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 06:20:48.168253   30924 out.go:177] 
	W0315 06:20:48.169763   30924 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 06:20:48.169781   30924 out.go:239] * 
	* 
	W0315 06:20:48.171858   30924 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 06:20:48.174302   30924 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-866665 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-866665 --wait=true -v=7 --alsologtostderr
E0315 06:24:21.072093   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-866665 --wait=true -v=7 --alsologtostderr: exit status 80 (3m55.814738035s)

                                                
                                                
-- stdout --
	* [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	* Updating the running kvm2 "ha-866665" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-866665-m02" control-plane node in "ha-866665" cluster
	* Restarting existing kvm2 VM for "ha-866665-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.78
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.78
	* Verifying Kubernetes components...
	
	* Starting "ha-866665-m03" control-plane node in "ha-866665" cluster
	* Restarting existing kvm2 VM for "ha-866665-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.78,192.168.39.27
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.78
	  - env NO_PROXY=192.168.39.78,192.168.39.27
	* Verifying Kubernetes components...
	
	* Starting "ha-866665-m04" worker node in "ha-866665" cluster
	* Restarting existing kvm2 VM for "ha-866665-m04" ...
	* Updating the running kvm2 "ha-866665-m04" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:20:48.233395   31266 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:20:48.233693   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233703   31266 out.go:304] Setting ErrFile to fd 2...
	I0315 06:20:48.233707   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233974   31266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:20:48.234536   31266 out.go:298] Setting JSON to false
	I0315 06:20:48.235411   31266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3745,"bootTime":1710479904,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:20:48.235482   31266 start.go:139] virtualization: kvm guest
	I0315 06:20:48.238560   31266 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:20:48.240218   31266 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:20:48.240225   31266 notify.go:220] Checking for updates...
	I0315 06:20:48.241922   31266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:20:48.243333   31266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:20:48.244647   31266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:20:48.245904   31266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:20:48.247270   31266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:20:48.249101   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:48.249189   31266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:20:48.249650   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.249692   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.264611   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0315 06:20:48.265138   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.265713   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.265743   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.266115   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.266310   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.301775   31266 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:20:48.303090   31266 start.go:297] selected driver: kvm2
	I0315 06:20:48.303107   31266 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.303243   31266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:20:48.303557   31266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.303624   31266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:20:48.318165   31266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:20:48.318839   31266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:20:48.318901   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:20:48.318914   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:20:48.318976   31266 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
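
The cluster config dumped above is the same structure that gets persisted to the profile's config.json (see the "Saving config to ..." line below). A minimal sketch of reading just the fields visible in this dump with encoding/json; the struct and field names here are assumptions inferred from the log output, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the handful of fields visible in the log dump above; the real
// minikube cluster config has many more.
type nodeConfig struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
	Nodes []nodeConfig
}

func main() {
	// Path taken from the "Saving config to ..." log line below.
	raw, err := os.ReadFile("/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json")
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("profile %s: driver=%s, k8s=%s, nodes=%d\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, len(cfg.Nodes))
}
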
	I0315 06:20:48.319098   31266 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.320866   31266 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:20:48.322118   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:20:48.322150   31266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:20:48.322163   31266 cache.go:56] Caching tarball of preloaded images
	I0315 06:20:48.322262   31266 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:20:48.322275   31266 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
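
The preload check above amounts to testing for a version- and runtime-specific tarball under the cache directory. A small sketch of that existence test, with the filename pattern copied from the path in the log (the "v18" preload schema version is assumed fixed here):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cached tarball name the way the log shows it:
// preloaded-images-k8s-v18-<k8sVersion>-<runtime>-overlay-<arch>.tar.lz4
func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath("/home/jenkins/minikube-integration/18213-8825/.minikube/cache",
		"v1.28.4", "cri-o", "amd64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
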
	I0315 06:20:48.322412   31266 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:20:48.322603   31266 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:20:48.322644   31266 start.go:364] duration metric: took 25.657µs to acquireMachinesLock for "ha-866665"
	I0315 06:20:48.322657   31266 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:20:48.322667   31266 fix.go:54] fixHost starting: 
	I0315 06:20:48.322903   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.322934   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.337122   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0315 06:20:48.337522   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.337966   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.337984   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.338306   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.338487   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.338668   31266 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:20:48.340290   31266 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:20:48.340310   31266 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:20:48.342346   31266 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:20:48.343623   31266 machine.go:94] provisionDockerMachine start ...
	I0315 06:20:48.343641   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.343821   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.346289   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346782   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.346824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346966   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.347119   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347285   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347418   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.347544   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.347724   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.347735   31266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:20:48.450351   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.450383   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450661   31266 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:20:48.450684   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450849   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.453380   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453790   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.453818   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453891   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.454090   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454251   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454383   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.454547   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.454720   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.454732   31266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:20:48.576850   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.576878   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.579606   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.579972   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.580005   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.580121   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.580306   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580483   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580636   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.580815   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.581041   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.581065   31266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:20:48.682862   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
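
Every provisioning step in this stretch (hostname, /etc/hosts, and the cert copies that follow) is a shell command run over SSH as the "docker" user with the machine's id_rsa key, logged as "About to run SSH command" / "SSH cmd err, output" pairs. A minimal sketch of such a runner using golang.org/x/crypto/ssh, with host, user and key path taken from the log; this is an illustration, not minikube's ssh_runner:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the node and returns the combined output of one command.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.78:22", "docker",
		"/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
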
	I0315 06:20:48.682887   31266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:20:48.682906   31266 buildroot.go:174] setting up certificates
	I0315 06:20:48.682935   31266 provision.go:84] configureAuth start
	I0315 06:20:48.682950   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.683239   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:20:48.686023   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686417   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.686450   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686552   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.688525   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.688908   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.688934   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.689080   31266 provision.go:143] copyHostCerts
	I0315 06:20:48.689110   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689138   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:20:48.689146   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689206   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:20:48.689286   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689314   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:20:48.689321   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689345   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:20:48.689388   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689404   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:20:48.689410   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689430   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:20:48.689471   31266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:20:49.237189   31266 provision.go:177] copyRemoteCerts
	I0315 06:20:49.237247   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:20:49.237269   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.239856   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240163   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.240195   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240300   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.240501   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.240683   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.240845   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:20:49.320109   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:20:49.320179   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:20:49.347303   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:20:49.347368   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:20:49.373709   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:20:49.373780   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:20:49.400806   31266 provision.go:87] duration metric: took 717.857802ms to configureAuth
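
configureAuth regenerates the docker-machine server certificate with the SANs listed above ([127.0.0.1 192.168.39.78 ha-866665 localhost minikube]), signed by the local CA from ~/.minikube/certs. A hedged sketch of that signing step with crypto/x509; key sizes, validity periods and the throwaway CA are assumptions, and error handling is trimmed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed CA: the real flow loads ca.pem/ca-key.pem from ~/.minikube/certs;
	// here we generate a throwaway CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the "generating server cert" line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-866665"}},
		DNSNames:     []string{"ha-866665", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
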
	I0315 06:20:49.400834   31266 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:20:49.401098   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:49.401246   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.404071   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404492   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.404524   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404710   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.404892   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405052   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405236   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.405428   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:49.405641   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:49.405663   31266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:22:20.418848   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:22:20.418872   31266 machine.go:97] duration metric: took 1m32.075236038s to provisionDockerMachine
	I0315 06:22:20.418884   31266 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:22:20.418893   31266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:22:20.418908   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.419251   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:22:20.419276   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.422223   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422630   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.422653   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422780   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.422931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.423065   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.423242   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.505795   31266 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:22:20.510297   31266 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:22:20.510324   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:22:20.510382   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:22:20.510451   31266 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:22:20.510461   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:22:20.510550   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:22:20.521122   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:20.547933   31266 start.go:296] duration metric: took 129.036646ms for postStartSetup
	I0315 06:22:20.547978   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.548256   31266 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:22:20.548281   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.550824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551345   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.551367   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551588   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.551778   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.551927   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.552071   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:22:20.631948   31266 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:22:20.631985   31266 fix.go:56] duration metric: took 1m32.309321607s for fixHost
	I0315 06:22:20.632007   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.635221   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635666   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.635698   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635839   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.636059   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636205   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636327   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.636488   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:22:20.636663   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:22:20.636675   31266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:22:20.737851   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483740.705795243
	
	I0315 06:22:20.737873   31266 fix.go:216] guest clock: 1710483740.705795243
	I0315 06:22:20.737880   31266 fix.go:229] Guest: 2024-03-15 06:22:20.705795243 +0000 UTC Remote: 2024-03-15 06:22:20.631992794 +0000 UTC m=+92.446679747 (delta=73.802449ms)
	I0315 06:22:20.737903   31266 fix.go:200] guest clock delta is within tolerance: 73.802449ms
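
The clock check above runs `date +%s.%N` in the guest, parses the result, and compares it against the host timestamp captured when fixHost finished; the 73.802449ms delta is accepted. A small sketch of that comparison using the exact values from the log (the tolerance value is an assumption; the log only shows that ~74ms passes):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1710483740.705795243" (output of `date +%s.%N`)
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1710483740.705795243") // guest clock from the log
	remote := time.Date(2024, 3, 15, 6, 22, 20, 631992794, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed; not stated in the log
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
}
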
	I0315 06:22:20.737909   31266 start.go:83] releasing machines lock for "ha-866665", held for 1m32.415256417s
	I0315 06:22:20.737929   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.738195   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:20.741307   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.741994   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.742025   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.742221   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.742829   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743040   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743136   31266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:22:20.743200   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.743336   31266 ssh_runner.go:195] Run: cat /version.json
	I0315 06:22:20.743366   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.746043   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746264   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746484   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746514   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746631   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.746767   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746784   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746801   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.746931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.747000   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747060   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.747123   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.747171   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747308   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.822170   31266 ssh_runner.go:195] Run: systemctl --version
	I0315 06:22:20.864338   31266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:22:21.034553   31266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:22:21.041415   31266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:22:21.041490   31266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:22:21.051566   31266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:22:21.051586   31266 start.go:494] detecting cgroup driver to use...
	I0315 06:22:21.051648   31266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:22:21.068910   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:22:21.083923   31266 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:22:21.083988   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:22:21.099367   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:22:21.114470   31266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:22:21.261920   31266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:22:21.413984   31266 docker.go:233] disabling docker service ...
	I0315 06:22:21.414050   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:22:21.432166   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:22:21.446453   31266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:22:21.603068   31266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:22:21.758747   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:22:21.773638   31266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:22:21.795973   31266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:22:21.796067   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.809281   31266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:22:21.809373   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.820969   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.832684   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.843891   31266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:22:21.855419   31266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:22:21.867162   31266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:22:21.877235   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:22.024876   31266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:22:27.210727   31266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.185810009s)
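
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager = "cgroupfs", re-add conmon_cgroup = "pod") and then restarts crio. The same edits expressed as a small Go sketch over a local copy of the file; this is a translation of the sed lines for illustration, not minikube's actual code:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // local copy; the real file is /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	// sed '/conmon_cgroup = .*/d' followed by
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
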
	I0315 06:22:27.210754   31266 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:22:27.210796   31266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:22:27.215990   31266 start.go:562] Will wait 60s for crictl version
	I0315 06:22:27.216039   31266 ssh_runner.go:195] Run: which crictl
	I0315 06:22:27.219900   31266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:22:27.261162   31266 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:22:27.261285   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.294548   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.328151   31266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:22:27.329667   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:27.332373   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.332800   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:27.332816   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.333023   31266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:22:27.338097   31266 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:22:27.338218   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:22:27.338265   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.384063   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.384086   31266 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:22:27.384141   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.423578   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.423601   31266 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:22:27.423609   31266 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:22:27.423697   31266 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:22:27.423756   31266 ssh_runner.go:195] Run: crio config
	I0315 06:22:27.482626   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:22:27.482649   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:22:27.482662   31266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:22:27.482691   31266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:22:27.482834   31266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
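
The kubeadm config above is rendered from the option struct logged at 06:22:27.482691 (advertise address, node name, CRI socket, pod/service CIDRs, cgroup driver) and later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch of that rendering, limited to the InitConfiguration section; the template text is a reduced stand-in, not minikube's embedded template:

package main

import (
	"os"
	"text/template"
)

// Only the options that appear in the InitConfiguration above.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	NodeIP           string
	CRISocket        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	opts := kubeadmOpts{ // values from the log
		AdvertiseAddress: "192.168.39.78",
		APIServerPort:    8443,
		NodeName:         "ha-866665",
		NodeIP:           "192.168.39.78",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
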
	
	I0315 06:22:27.482850   31266 kube-vip.go:111] generating kube-vip config ...
	I0315 06:22:27.482886   31266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:22:27.497074   31266 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:22:27.497204   31266 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
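
The lb_enable / lb_port entries in the kube-vip manifest above are only added because the earlier `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` probe succeeded ("auto-enabling control-plane load-balancing in kube-vip"). A small sketch of that gate, shelling out to modprobe with the exact module list from the log; the toggle logic here is an assumption about how the decision could be expressed:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the probe in the log: if all the IPVS modules load,
// control-plane load-balancing can be enabled in the kube-vip static pod.
func ipvsAvailable() bool {
	mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
	cmd := exec.Command("sudo", append([]string{"modprobe", "--all"}, mods...)...)
	return cmd.Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println(`adding lb_enable="true", lb_port="8443" to the kube-vip pod env`)
	} else {
		fmt.Println("IPVS modules unavailable; leaving load-balancing off")
	}
}
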
	I0315 06:22:27.497284   31266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:22:27.509195   31266 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:22:27.509286   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:22:27.520191   31266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:22:27.538135   31266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:22:27.555610   31266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:22:27.573955   31266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:22:27.593596   31266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:22:27.598156   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:27.747192   31266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:22:27.764301   31266 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:22:27.764333   31266 certs.go:194] generating shared ca certs ...
	I0315 06:22:27.764355   31266 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.764534   31266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:22:27.764615   31266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:22:27.764630   31266 certs.go:256] generating profile certs ...
	I0315 06:22:27.764730   31266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:22:27.764765   31266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68
	I0315 06:22:27.764786   31266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:22:27.902249   31266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 ...
	I0315 06:22:27.902281   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68: {Name:mk4ec3568f719ba46ca54f4c420840c2b2fdca4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902456   31266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 ...
	I0315 06:22:27.902473   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68: {Name:mka2b45e463d67423a36473df143eb634ee13f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902571   31266 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:22:27.902733   31266 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:22:27.902906   31266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:22:27.902923   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:22:27.902942   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:22:27.902957   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:22:27.902977   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:22:27.903001   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:22:27.903021   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:22:27.903035   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:22:27.903050   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:22:27.903117   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:22:27.903157   31266 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:22:27.903170   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:22:27.903219   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:22:27.903252   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:22:27.903289   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:22:27.903350   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:27.903416   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:22:27.903454   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:22:27.903473   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:27.904019   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:22:27.931140   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:22:27.956928   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:22:27.981629   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:22:28.007100   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 06:22:28.032763   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:22:28.057851   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:22:28.086521   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:22:28.112212   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:22:28.139218   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:22:28.164931   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:22:28.191225   31266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:22:28.209099   31266 ssh_runner.go:195] Run: openssl version
	I0315 06:22:28.215089   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:22:28.226199   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230951   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230998   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.237257   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:22:28.247307   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:22:28.258550   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263269   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263323   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.269418   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:22:28.283320   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:22:28.347367   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358725   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358796   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.386093   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
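The openssl steps above are how minikube publishes each CA into the guest's trust store: compute the certificate's subject hash with "openssl x509 -hash -noout" and symlink the PEM as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that convention, shelling out to openssl the same way (the paths and the helper name are illustrative, not minikube's own code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash mirrors the log: ask openssl for the cert's subject hash,
    // then symlink /etc/ssl/certs/<hash>.0 to the PEM so TLS clients can find it.
    func linkCertByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // same effect as the forced "ln -fs" in the log
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }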
	I0315 06:22:28.404000   31266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:22:28.431851   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:22:28.439647   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:22:28.452233   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:22:28.464741   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:22:28.479804   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:22:28.488488   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
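The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 24 hours; a non-zero exit is what triggers regeneration. The same test expressed with Go's crypto/x509 (a sketch; the path is just one of the certs listed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded cert at path expires within d,
    // matching the semantics of "openssl x509 -checkend <seconds>".
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }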
	I0315 06:22:28.494840   31266 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:22:28.495020   31266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:22:28.495101   31266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:22:28.554274   31266 cri.go:89] found id: "c6fb756ec96d63a35d3d309d8f9f0e4b3ba437bc3e2ab9b64aeedaefae913df8"
	I0315 06:22:28.554299   31266 cri.go:89] found id: "dcdaf40ca56142d0131435198e249e6b4f6618b31356b7d2753d5ef5312de8d5"
	I0315 06:22:28.554305   31266 cri.go:89] found id: "c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2"
	I0315 06:22:28.554310   31266 cri.go:89] found id: "9b4a5b482d487e39ba565da240819c12b69d88ec3854e05cc308a1d7226aaa46"
	I0315 06:22:28.554314   31266 cri.go:89] found id: "21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0"
	I0315 06:22:28.554317   31266 cri.go:89] found id: "652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855"
	I0315 06:22:28.554322   31266 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:22:28.554325   31266 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:22:28.554329   31266 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:22:28.554336   31266 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:22:28.554340   31266 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:22:28.554343   31266 cri.go:89] found id: "b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551"
	I0315 06:22:28.554348   31266 cri.go:89] found id: "dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323"
	I0315 06:22:28.554351   31266 cri.go:89] found id: ""
	I0315 06:22:28.554403   31266 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
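The stderr above is cut off while minikube enumerates kube-system containers ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system") before handing off to "runc list". A rough Go sketch of that enumeration step, run locally rather than through minikube's ssh_runner (the helper name is made up for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemIDs returns the container IDs crictl reports for pods in the
    // kube-system namespace, one ID per output line (as --quiet prints them).
    func listKubeSystemIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemIDs()
        fmt.Println(len(ids), "containers", err)
    }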
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-866665 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-866665
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (2.047875151s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:20:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
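The header spells out the klog line format used throughout this log: severity, date, timestamp, thread id, source location, message. A small Go regexp that splits one of these lines into its fields (purely illustrative; nothing in minikube parses its own logs this way):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the format described above: severity, month/day, time,
    // thread id, file:line, and the free-form message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch("I0315 06:20:48.233395   31266 out.go:291] Setting OutFile to fd 1 ...")
        if m != nil {
            fmt.Printf("severity=%s date=%s-%s time=%s tid=%s at=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6], m[7])
        }
    }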
	I0315 06:20:48.233395   31266 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:20:48.233693   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233703   31266 out.go:304] Setting ErrFile to fd 2...
	I0315 06:20:48.233707   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233974   31266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:20:48.234536   31266 out.go:298] Setting JSON to false
	I0315 06:20:48.235411   31266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3745,"bootTime":1710479904,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:20:48.235482   31266 start.go:139] virtualization: kvm guest
	I0315 06:20:48.238560   31266 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:20:48.240218   31266 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:20:48.240225   31266 notify.go:220] Checking for updates...
	I0315 06:20:48.241922   31266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:20:48.243333   31266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:20:48.244647   31266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:20:48.245904   31266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:20:48.247270   31266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:20:48.249101   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:48.249189   31266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:20:48.249650   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.249692   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.264611   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0315 06:20:48.265138   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.265713   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.265743   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.266115   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.266310   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.301775   31266 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:20:48.303090   31266 start.go:297] selected driver: kvm2
	I0315 06:20:48.303107   31266 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.303243   31266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:20:48.303557   31266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.303624   31266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:20:48.318165   31266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:20:48.318839   31266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:20:48.318901   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:20:48.318914   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:20:48.318976   31266 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.319098   31266 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.320866   31266 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:20:48.322118   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:20:48.322150   31266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:20:48.322163   31266 cache.go:56] Caching tarball of preloaded images
	I0315 06:20:48.322262   31266 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:20:48.322275   31266 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:20:48.322412   31266 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:20:48.322603   31266 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:20:48.322644   31266 start.go:364] duration metric: took 25.657µs to acquireMachinesLock for "ha-866665"
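acquireMachinesLock serializes all machine operations behind a named lock with a retry delay and an overall timeout (Delay:500ms, Timeout:13m0s in the entry above); here it is uncontended and returns in about 26µs. A rough Go sketch of such an acquire loop, using a hypothetical O_EXCL-based tryLock rather than minikube's real lock package:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // tryLock is a stand-in for a real file lock: O_EXCL creation either wins
    // the lock or fails because another process already holds it.
    func tryLock(path string) (bool, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if os.IsExist(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, f.Close()
    }

    // acquireWithRetry polls tryLock every delay until timeout, mirroring the
    // {Delay:500ms Timeout:13m0s} settings printed in the log.
    func acquireWithRetry(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := tryLock(path)
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        fmt.Println(acquireWithRetry("/tmp/ha-866665.lock", 500*time.Millisecond, 13*time.Minute))
    }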
	I0315 06:20:48.322657   31266 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:20:48.322667   31266 fix.go:54] fixHost starting: 
	I0315 06:20:48.322903   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.322934   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.337122   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0315 06:20:48.337522   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.337966   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.337984   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.338306   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.338487   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.338668   31266 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:20:48.340290   31266 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:20:48.340310   31266 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:20:48.342346   31266 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:20:48.343623   31266 machine.go:94] provisionDockerMachine start ...
	I0315 06:20:48.343641   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.343821   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.346289   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346782   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.346824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346966   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.347119   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347285   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347418   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.347544   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.347724   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.347735   31266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:20:48.450351   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.450383   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450661   31266 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:20:48.450684   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450849   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.453380   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453790   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.453818   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453891   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.454090   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454251   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454383   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.454547   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.454720   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.454732   31266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:20:48.576850   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.576878   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.579606   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.579972   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.580005   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.580121   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.580306   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580483   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580636   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.580815   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.581041   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.581065   31266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:20:48.682862   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
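The empty SSH output above means the /etc/hosts fix-up succeeded: the script either rewrites an existing 127.0.1.1 entry or appends one for the new hostname. A Go sketch that renders the same shell snippet for an arbitrary hostname (the template is transcribed from the command above, not taken from minikube's source):

    package main

    import "fmt"

    // hostsFixup returns the shell snippet run over SSH to make sure /etc/hosts
    // resolves the machine's hostname to 127.0.1.1.
    func hostsFixup(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(hostsFixup("ha-866665"))
    }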
	I0315 06:20:48.682887   31266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:20:48.682906   31266 buildroot.go:174] setting up certificates
	I0315 06:20:48.682935   31266 provision.go:84] configureAuth start
	I0315 06:20:48.682950   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.683239   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:20:48.686023   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686417   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.686450   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686552   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.688525   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.688908   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.688934   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.689080   31266 provision.go:143] copyHostCerts
	I0315 06:20:48.689110   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689138   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:20:48.689146   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689206   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:20:48.689286   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689314   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:20:48.689321   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689345   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:20:48.689388   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689404   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:20:48.689410   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689430   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:20:48.689471   31266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
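provision.go regenerates the machine's server certificate against the minikube CA with the SANs listed above (127.0.0.1, the node IP, the hostname, localhost, minikube). A compressed crypto/x509 sketch of issuing such a certificate; serial handling, key sizes and PEM encoding are simplified relative to the real provisioner:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with the CA,
    // roughly what the "generating server cert" step above does.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dns []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "ha-866665", Organization: []string{"jenkins.ha-866665"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dns,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Self-signed throwaway CA so the sketch runs standalone.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, _, err := issueServerCert(ca, caKey,
            []string{"ha-866665", "localhost", "minikube"},
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")})
        fmt.Println(len(der), err)
    }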
	I0315 06:20:49.237189   31266 provision.go:177] copyRemoteCerts
	I0315 06:20:49.237247   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:20:49.237269   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.239856   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240163   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.240195   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240300   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.240501   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.240683   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.240845   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:20:49.320109   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:20:49.320179   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:20:49.347303   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:20:49.347368   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:20:49.373709   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:20:49.373780   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:20:49.400806   31266 provision.go:87] duration metric: took 717.857802ms to configureAuth
	I0315 06:20:49.400834   31266 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:20:49.401098   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:49.401246   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.404071   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404492   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.404524   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404710   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.404892   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405052   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405236   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.405428   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:49.405641   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:49.405663   31266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:22:20.418848   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:22:20.418872   31266 machine.go:97] duration metric: took 1m32.075236038s to provisionDockerMachine
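Most of that 1m32s provision time is the step just above: writing the CRIO_MINIKUBE_OPTIONS drop-in and restarting crio over SSH ran from 06:20:49 to 06:22:20. A small Go sketch that assembles the same remote command as a string (sending it over SSH is left to minikube's ssh_runner):

    package main

    import "fmt"

    // crioSysconfigCmd builds the remote command from the log: write the
    // CRIO_MINIKUBE_OPTIONS drop-in, then restart the crio service.
    func crioSysconfigCmd(serviceCIDR string) string {
        content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
        return `sudo mkdir -p /etc/sysconfig && printf %s "` + content +
            `" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    }

    func main() {
        fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
    }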
	I0315 06:22:20.418884   31266 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:22:20.418893   31266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:22:20.418908   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.419251   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:22:20.419276   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.422223   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422630   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.422653   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422780   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.422931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.423065   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.423242   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.505795   31266 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:22:20.510297   31266 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:22:20.510324   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:22:20.510382   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:22:20.510451   31266 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:22:20.510461   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:22:20.510550   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:22:20.521122   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:20.547933   31266 start.go:296] duration metric: took 129.036646ms for postStartSetup
	I0315 06:22:20.547978   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.548256   31266 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:22:20.548281   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.550824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551345   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.551367   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551588   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.551778   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.551927   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.552071   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:22:20.631948   31266 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:22:20.631985   31266 fix.go:56] duration metric: took 1m32.309321607s for fixHost
	I0315 06:22:20.632007   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.635221   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635666   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.635698   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635839   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.636059   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636205   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636327   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.636488   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:22:20.636663   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:22:20.636675   31266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:22:20.737851   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483740.705795243
	
	I0315 06:22:20.737873   31266 fix.go:216] guest clock: 1710483740.705795243
	I0315 06:22:20.737880   31266 fix.go:229] Guest: 2024-03-15 06:22:20.705795243 +0000 UTC Remote: 2024-03-15 06:22:20.631992794 +0000 UTC m=+92.446679747 (delta=73.802449ms)
	I0315 06:22:20.737903   31266 fix.go:200] guest clock delta is within tolerance: 73.802449ms
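Editor's note: the guest-clock check above compares the time reported by the VM against the host's and accepts the host if the drift is small (73.8ms here). A minimal sketch of that delta-and-tolerance logic follows, using the two timestamps from the log line; the helper name and the 2s threshold are assumptions for illustration, not minikube's fix.go code.

```go
package main

import (
	"fmt"
	"time"
)

// checkClockDelta reports the absolute difference between guest and host
// clocks and whether it falls within the given tolerance.
func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1710483740705795243) // guest clock from the log line above
	host := time.Unix(0, 1710483740631992794)  // remote timestamp from the same line
	delta, ok := checkClockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}
```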
	I0315 06:22:20.737909   31266 start.go:83] releasing machines lock for "ha-866665", held for 1m32.415256417s
	I0315 06:22:20.737929   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.738195   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:20.741307   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.741994   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.742025   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.742221   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.742829   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743040   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743136   31266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:22:20.743200   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.743336   31266 ssh_runner.go:195] Run: cat /version.json
	I0315 06:22:20.743366   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.746043   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746264   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746484   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746514   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746631   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.746767   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746784   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746801   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.746931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.747000   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747060   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.747123   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.747171   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747308   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.822170   31266 ssh_runner.go:195] Run: systemctl --version
	I0315 06:22:20.864338   31266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:22:21.034553   31266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:22:21.041415   31266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:22:21.041490   31266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:22:21.051566   31266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:22:21.051586   31266 start.go:494] detecting cgroup driver to use...
	I0315 06:22:21.051648   31266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:22:21.068910   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:22:21.083923   31266 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:22:21.083988   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:22:21.099367   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:22:21.114470   31266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:22:21.261920   31266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:22:21.413984   31266 docker.go:233] disabling docker service ...
	I0315 06:22:21.414050   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:22:21.432166   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:22:21.446453   31266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:22:21.603068   31266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:22:21.758747   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:22:21.773638   31266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:22:21.795973   31266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:22:21.796067   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.809281   31266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:22:21.809373   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.820969   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.832684   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.843891   31266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:22:21.855419   31266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:22:21.867162   31266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:22:21.877235   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:22.024876   31266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:22:27.210727   31266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.185810009s)
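Editor's note: the block above reconfigures cri-o in place (pause image set to registry.k8s.io/pause:3.9, cgroup driver to cgroupfs) by rewriting lines in /etc/crio/crio.conf.d/02-crio.conf, then restarts crio. The sketch below applies equivalent whole-line replacements to a sample config string; the sample input is assumed, and minikube does this with sed on the guest rather than in Go.

```go
package main

import (
	"fmt"
	"regexp"
)

// Rewrite the pause_image and cgroup_manager keys of a crio drop-in
// config, mirroring the sed edits logged above.
func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```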
	I0315 06:22:27.210754   31266 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:22:27.210796   31266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:22:27.215990   31266 start.go:562] Will wait 60s for crictl version
	I0315 06:22:27.216039   31266 ssh_runner.go:195] Run: which crictl
	I0315 06:22:27.219900   31266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:22:27.261162   31266 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:22:27.261285   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.294548   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.328151   31266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:22:27.329667   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:27.332373   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.332800   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:27.332816   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.333023   31266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:22:27.338097   31266 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:22:27.338218   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:22:27.338265   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.384063   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.384086   31266 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:22:27.384141   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.423578   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.423601   31266 cache_images.go:84] Images are preloaded, skipping loading
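Editor's note: the preload check above runs `sudo crictl images --output json` and concludes that all expected images are already present. A rough sketch of reading that output is below; the JSON field names ("images", "repoTags") are assumptions about crictl's output shape, and minikube's real check compares the list against the expected preloaded image set.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// List image tags reported by crictl, as a stand-in for the preload check.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.RepoTags)
	}
}
```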
	I0315 06:22:27.423609   31266 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:22:27.423697   31266 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
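Editor's note: the kubelet systemd drop-in above is generated per node from the cluster config (binary version, hostname override, node IP). The sketch below renders a unit of the same shape with text/template; the template text and field names are assumptions for illustration, not minikube's actual templates.

```go
package main

import (
	"os"
	"text/template"
)

// Per-node values substituted into the kubelet unit.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.28.4",
		NodeName:          "ha-866665",
		NodeIP:            "192.168.39.78",
	})
}
```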
	I0315 06:22:27.423756   31266 ssh_runner.go:195] Run: crio config
	I0315 06:22:27.482626   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:22:27.482649   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:22:27.482662   31266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:22:27.482691   31266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:22:27.482834   31266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
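Editor's note: the generated kubeadm config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A quick sanity check one can run on such a config is that the two ranges do not overlap; the sketch below is an illustrative check, not something the log itself performs.

```go
package main

import (
	"fmt"
	"net/netip"
)

// Verify that the pod CIDR and service CIDR from the kubeadm config are disjoint.
func main() {
	podSubnet := netip.MustParsePrefix("10.244.0.0/16")
	serviceSubnet := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("overlap:", podSubnet.Overlaps(serviceSubnet)) // false for these values
}
```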
	
	I0315 06:22:27.482850   31266 kube-vip.go:111] generating kube-vip config ...
	I0315 06:22:27.482886   31266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:22:27.497074   31266 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:22:27.497204   31266 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
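Editor's note: the kube-vip static-pod manifest above configures leader election for the control-plane VIP with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1. The usual constraint for client-go style leader election is retryPeriod < renewDeadline < leaseDuration; the check below illustrates that relationship with the values from the manifest and is an assumption for illustration, not kube-vip's own validation.

```go
package main

import (
	"fmt"
	"time"
)

// Check the leader-election timing relationship for the generated manifest.
func main() {
	lease := 5 * time.Second
	renew := 3 * time.Second
	retry := 1 * time.Second
	fmt.Println("timings sane:", retry < renew && renew < lease)
}
```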
	I0315 06:22:27.497284   31266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:22:27.509195   31266 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:22:27.509286   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:22:27.520191   31266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:22:27.538135   31266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:22:27.555610   31266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:22:27.573955   31266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:22:27.593596   31266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:22:27.598156   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:27.747192   31266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:22:27.764301   31266 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:22:27.764333   31266 certs.go:194] generating shared ca certs ...
	I0315 06:22:27.764355   31266 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.764534   31266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:22:27.764615   31266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:22:27.764630   31266 certs.go:256] generating profile certs ...
	I0315 06:22:27.764730   31266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:22:27.764765   31266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68
	I0315 06:22:27.764786   31266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:22:27.902249   31266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 ...
	I0315 06:22:27.902281   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68: {Name:mk4ec3568f719ba46ca54f4c420840c2b2fdca4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902456   31266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 ...
	I0315 06:22:27.902473   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68: {Name:mka2b45e463d67423a36473df143eb634ee13f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902571   31266 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:22:27.902733   31266 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
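Editor's note: the profile cert generated above carries IP SANs for the service IP, localhost, every control-plane node and the HA VIP 192.168.39.254, so the apiserver is reachable under any of those addresses. The sketch below issues a certificate with the same IP SAN list using crypto/x509; it is self-signed for brevity, whereas minikube signs against its minikubeCA.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Issue a server certificate whose IP SANs match the list from the log.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.78"), net.ParseIP("192.168.39.27"),
			net.ParseIP("192.168.39.89"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```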
	I0315 06:22:27.902906   31266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:22:27.902923   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:22:27.902942   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:22:27.902957   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:22:27.902977   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:22:27.903001   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:22:27.903021   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:22:27.903035   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:22:27.903050   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:22:27.903117   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:22:27.903157   31266 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:22:27.903170   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:22:27.903219   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:22:27.903252   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:22:27.903289   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:22:27.903350   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:27.903416   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:22:27.903454   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:22:27.903473   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:27.904019   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:22:27.931140   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:22:27.956928   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:22:27.981629   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:22:28.007100   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 06:22:28.032763   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:22:28.057851   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:22:28.086521   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:22:28.112212   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:22:28.139218   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:22:28.164931   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:22:28.191225   31266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:22:28.209099   31266 ssh_runner.go:195] Run: openssl version
	I0315 06:22:28.215089   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:22:28.226199   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230951   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230998   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.237257   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:22:28.247307   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:22:28.258550   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263269   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263323   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.269418   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:22:28.283320   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:22:28.347367   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358725   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358796   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.386093   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:22:28.404000   31266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:22:28.431851   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:22:28.439647   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:22:28.452233   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:22:28.464741   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:22:28.479804   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:22:28.488488   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
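Editor's note: the `openssl x509 -noout -checkend 86400` calls above verify that each control-plane certificate is still valid for at least another 24 hours before the cluster is restarted. The sketch below does the equivalent check in Go on a PEM file given as a command-line argument; taking the path as an argument is an assumption of this sketch, since minikube shells out to openssl on the guest instead.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Report whether a PEM certificate expires within the next 24 hours.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	expiresSoon := time.Until(cert.NotAfter) < 24*time.Hour
	fmt.Printf("notAfter=%s expiresWithin24h=%v\n", cert.NotAfter, expiresSoon)
}
```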
	I0315 06:22:28.494840   31266 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:22:28.495020   31266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:22:28.495101   31266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:22:28.554274   31266 cri.go:89] found id: "c6fb756ec96d63a35d3d309d8f9f0e4b3ba437bc3e2ab9b64aeedaefae913df8"
	I0315 06:22:28.554299   31266 cri.go:89] found id: "dcdaf40ca56142d0131435198e249e6b4f6618b31356b7d2753d5ef5312de8d5"
	I0315 06:22:28.554305   31266 cri.go:89] found id: "c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2"
	I0315 06:22:28.554310   31266 cri.go:89] found id: "9b4a5b482d487e39ba565da240819c12b69d88ec3854e05cc308a1d7226aaa46"
	I0315 06:22:28.554314   31266 cri.go:89] found id: "21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0"
	I0315 06:22:28.554317   31266 cri.go:89] found id: "652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855"
	I0315 06:22:28.554322   31266 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:22:28.554325   31266 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:22:28.554329   31266 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:22:28.554336   31266 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:22:28.554340   31266 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:22:28.554343   31266 cri.go:89] found id: "b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551"
	I0315 06:22:28.554348   31266 cri.go:89] found id: "dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323"
	I0315 06:22:28.554351   31266 cri.go:89] found id: ""
	I0315 06:22:28.554403   31266 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.753781324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483884753745362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7f87078-b947-45db-9ce9-4907e7ef1406 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.758929285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72b56a44-16fb-4dff-bb49-1013bae3fa22 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.759066780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72b56a44-16fb-4dff-bb49-1013bae3fa22 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.760275399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5e
b98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc6
9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff
14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State
:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710
483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72b56a44-16fb-4dff-bb49-1013bae3fa22 name=/runtime.v1.RuntimeService/ListContainers
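The ListContainersResponse entry above is the full container inventory crio reports for the ha-866665 control-plane node. For debugging, a rough sketch of pulling the same inventory interactively is shown below; it assumes the ha-866665 profile is still running and that crictl is available on the guest (it normally is in the minikube image), so treat it as illustrative rather than part of the captured test run:

    # open a shell on the ha-866665 control-plane node
    minikube -p ha-866665 ssh

    # list all containers (running and exited); -o json exposes the same
    # Id/State/CreatedAt/Labels fields printed in the log entry above
    sudo crictl ps -a -o json

    # inspect a single container by the Id shown in the dump, e.g. the
    # restarted coredns container
    sudo crictl inspect 8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de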
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.811734801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cc2c3f5-812b-4254-ae7e-7cfedf9f11d2 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.811807320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cc2c3f5-812b-4254-ae7e-7cfedf9f11d2 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.813479101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=473965fd-ae65-4925-8290-77a601b2ca58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.813908600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483884813886668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=473965fd-ae65-4925-8290-77a601b2ca58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.814769866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d48103f4-67fa-4e98-9539-97eaa91c2b5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.814892599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d48103f4-67fa-4e98-9539-97eaa91c2b5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.815600343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5e
b98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc6
9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff
14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State
:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710
483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d48103f4-67fa-4e98-9539-97eaa91c2b5c name=/runtime.v1.RuntimeService/ListContainers
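The surrounding debug entries record three CRI RPCs being served by crio in a tight loop: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter ("No filters were applied, returning full container list"). As a sketch only, the same three queries can be issued by hand on the node with standard crictl subcommands; this is an illustration, not part of the recorded log:

    # runtime name and version, matching the VersionResponse entries
    sudo crictl version

    # image filesystem usage, matching the ImageFsInfoResponse entries
    sudo crictl imagefsinfo

    # unfiltered container list, matching the ListContainersResponse entries
    sudo crictl ps -a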
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.863191057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=960b49cf-f67d-4d85-8646-bd349cff5fb4 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.863338796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=960b49cf-f67d-4d85-8646-bd349cff5fb4 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.872080511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e819de13-151d-494f-b645-cd91296b26c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.872756338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483884872651888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e819de13-151d-494f-b645-cd91296b26c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.873421631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e93c6537-20a1-44f1-8915-fe048608483a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.873479061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e93c6537-20a1-44f1-8915-fe048608483a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.873908872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5e
b98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc6
9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff
14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State
:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710
483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e93c6537-20a1-44f1-8915-fe048608483a name=/runtime.v1.RuntimeService/ListContainers
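Because the unfiltered listings above repeat the same inventory, narrowing the query is usually more useful when chasing the restarted control-plane containers (note the CONTAINER_EXITED entries with non-zero restartCount). A possible filtered variant, again only a sketch using standard crictl flags:

    # only exited containers on the node
    sudo crictl ps -a --state exited

    # containers for a single component, e.g. kube-apiserver on ha-866665
    sudo crictl ps -a --name kube-apiserver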
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.923594619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24a1d9e6-4c05-4708-a7c8-64ed84b92e38 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.923692963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24a1d9e6-4c05-4708-a7c8-64ed84b92e38 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.924973888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06a311ce-3bc1-403d-941b-69b957dd61eb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.926057329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483884926026425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06a311ce-3bc1-403d-941b-69b957dd61eb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.926754282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a2f8a94-cd20-4070-98ff-61f145e48e24 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.926837035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a2f8a94-cd20-4070-98ff-61f145e48e24 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:24:44 ha-866665 crio[3914]: time="2024-03-15 06:24:44.928758333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5e
b98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc6
9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff
14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State
:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710
483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a2f8a94-cd20-4070-98ff-61f145e48e24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a632d3a2baa85       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   c7f6cbdff0a6d       kindnet-9nvvx
	e490c56eb4c5d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   70721076f18d9       kube-controller-manager-ha-866665
	927c05bd830a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       4                   95c517450cdc3       storage-provisioner
	a912dc6e7f806       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   f7b655acbd708       kube-apiserver-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	950153b4c9efe       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   b3fef0e73d7bb       kube-vip-ha-866665
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   79337bac30908       etcd-ha-866665
	002360447d19f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   70721076f18d9       kube-controller-manager-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	a2fe596c61a10       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   f7b655acbd708       kube-apiserver-ha-866665
	8e97e91558ead       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   c7f6cbdff0a6d       kindnet-9nvvx
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
	c0c01dd7f22bd       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   2095201e88b51       kube-vip-ha-866665
	3893d7b08f562       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4b1a833979698       busybox-5b5d89c9d6-82knb
	bede6c7f8912b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   89474c2214060       coredns-5dd5756b68-r57px
	c0ecd2e858892       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   72c22c098aee5       coredns-5dd5756b68-mgthb
	c07640cff4ced       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   e15b87fb1896f       kube-proxy-sbxgg
	7fcd79ed43f7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago       Exited              kube-scheduler            0                   97bf2aa8738ce       kube-scheduler-ha-866665
	adc8145247000       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago       Exited              etcd                      0                   682c38a8f4263       etcd-ha-866665
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40960 - 48923 "HINFO IN 5600361727797088866.7930505399270773017. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012438849s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780] <==
	[INFO] 10.244.0.4:38164 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009591847s
	[INFO] 10.244.1.2:58652 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000766589s
	[INFO] 10.244.1.2:51069 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001862794s
	[INFO] 10.244.0.4:39512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00055199s
	[INFO] 10.244.0.4:46188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133209s
	[INFO] 10.244.0.4:45008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008468s
	[INFO] 10.244.0.4:37076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097079s
	[INFO] 10.244.1.2:45388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815413s
	[INFO] 10.244.1.2:40983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165928s
	[INFO] 10.244.1.2:41822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199064s
	[INFO] 10.244.1.2:51003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093469s
	[INFO] 10.244.2.2:52723 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155039s
	[INFO] 10.244.2.2:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105876s
	[INFO] 10.244.2.2:40110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118647s
	[INFO] 10.244.1.2:48735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190723s
	[INFO] 10.244.1.2:59420 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115761s
	[INFO] 10.244.1.2:44465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090898s
	[INFO] 10.244.2.2:55054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145748s
	[INFO] 10.244.2.2:48352 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081059s
	[INFO] 10.244.0.4:53797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115756s
	[INFO] 10.244.0.4:52841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114315s
	[INFO] 10.244.1.2:34071 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158733s
	[INFO] 10.244.2.2:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239839s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90] <==
	[INFO] 10.244.2.2:48404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148272s
	[INFO] 10.244.2.2:45614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171944s
	[INFO] 10.244.2.2:42730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	[INFO] 10.244.2.2:38361 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605049s
	[INFO] 10.244.2.2:54334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:51787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138576s
	[INFO] 10.244.0.4:35351 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081934s
	[INFO] 10.244.0.4:56185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140731s
	[INFO] 10.244.0.4:49966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062146s
	[INFO] 10.244.1.2:35089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123543s
	[INFO] 10.244.2.2:59029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184488s
	[INFO] 10.244.2.2:57369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103045s
	[INFO] 10.244.0.4:37219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243853s
	[INFO] 10.244.0.4:39054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129011s
	[INFO] 10.244.1.2:38863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321539s
	[INFO] 10.244.1.2:42772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125764s
	[INFO] 10.244.1.2:50426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114767s
	[INFO] 10.244.2.2:48400 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140476s
	[INFO] 10.244.2.2:47852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177728s
	[INFO] 10.244.2.2:44657 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185799s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59996 - 31934 "HINFO IN 4559653855558661573.857792855383948485. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019139547s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	Name:               ha-866665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:11:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    ha-866665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3eab3c085e414bb06a8b946d23d263
	  System UUID:                3e3eab3c-085e-414b-b06a-8b946d23d263
	  Boot ID:                    67c0c773-5540-4e63-8171-6ccf807dc545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-82knb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-mgthb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-r57px             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-866665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-9nvvx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-866665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-866665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-sbxgg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-866665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-866665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 89s                    kube-proxy       
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-866665 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-866665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-866665 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-866665 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Warning  ContainerGCFailed        2m40s (x2 over 3m40s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           81s                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           76s                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           21s                    node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	
	
	Name:               ha-866665-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:12:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-866665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 58bd1411345f4ad89979a7572186fe49
	  System UUID:                58bd1411-345f-4ad8-9979-a7572186fe49
	  Boot ID:                    0ce5b345-fbd5-48ed-970b-3bf380d65432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sdxnc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-866665-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-26vqf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-866665-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-866665-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lqzk8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-866665-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-866665-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  RegisteredNode           12m                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  NodeNotReady             8m51s                node-controller  Node ha-866665-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node ha-866665-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node ha-866665-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node ha-866665-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           76s                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           21s                  node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	
	
	Name:               ha-866665-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_13_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:13:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:24:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:24:25 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:24:25 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:24:25 +0000   Fri, 15 Mar 2024 06:13:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:24:25 +0000   Fri, 15 Mar 2024 06:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-866665-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 051bb833ce1b410da5218cd79b3897d3
	  System UUID:                051bb833-ce1b-410d-a521-8cd79b3897d3
	  Boot ID:                    5f14f63f-2d74-40ba-aff3-5786bb58e1cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-xc5x4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-866665-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qr9qm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-866665-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-866665-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-6wxfg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-866665-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-866665-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10m   kube-proxy       
	  Normal   Starting                 30s   kube-proxy       
	  Normal   RegisteredNode           10m   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal   RegisteredNode           10m   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal   RegisteredNode           10m   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal   RegisteredNode           81s   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal   RegisteredNode           76s   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	  Normal   Starting                 51s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  51s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  51s   kubelet          Node ha-866665-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s   kubelet          Node ha-866665-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s   kubelet          Node ha-866665-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 51s   kubelet          Node ha-866665-m03 has been rebooted, boot id: 5f14f63f-2d74-40ba-aff3-5786bb58e1cb
	  Normal   RegisteredNode           21s   node-controller  Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller
	
	
	Name:               ha-866665-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_14_48_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:14:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:18:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-866665-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba1c60db84af4e62b4dd3481111e694e
	  System UUID:                ba1c60db-84af-4e62-b4dd-3481111e694e
	  Boot ID:                    0376ead4-1240-436a-b9a9-8b12bb4d45e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j2vlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-proxy-bq6md    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x5 over 9m59s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x5 over 9m59s)  kubelet          Node ha-866665-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x5 over 9m59s)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m56s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           9m56s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           9m54s                  node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeReady                9m48s                  kubelet          Node ha-866665-m04 status is now: NodeReady
	  Normal  RegisteredNode           81s                    node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           76s                    node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeNotReady             41s                    node-controller  Node ha-866665-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           21s                    node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	
	
	==> dmesg <==
	[  +0.056814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054962] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.193593] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.117038] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.245141] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.806127] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059748] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.159068] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.996795] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:11] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"warn","ts":"2024-03-15T06:23:48.518762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:23:48.53577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:23:48.635805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"83fde65c75733ea3","from":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T06:23:48.754656Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:48.754778Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:49.503518Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:23:49.511817Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:23:52.757125Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:52.757193Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:54.504338Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:54.512969Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:56.760326Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:56.760395Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:59.505098Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:23:59.513692Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"bd5db29ca66a387","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:24:00.762495Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T06:24:00.762558Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"bd5db29ca66a387","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-15T06:24:02.517417Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.517479Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.517582Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.532034Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"bd5db29ca66a387","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T06:24:02.532145Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.537394Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"bd5db29ca66a387","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T06:24:02.537526Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:14.11881Z","caller":"traceutil/trace.go:171","msg":"trace[1095642664] transaction","detail":"{read_only:false; response_revision:2261; number_of_response:1; }","duration":"107.14777ms","start":"2024-03-15T06:24:14.011626Z","end":"2024-03-15T06:24:14.118774Z","steps":["trace[1095642664] 'process raft request'  (duration: 99.543453ms)"],"step_count":1}
	
	
	==> etcd [adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435] <==
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-15T06:20:49.61568Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:20:49.61574Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:20:49.615908Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:20:49.616111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616182Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616308Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616452Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616527Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616603Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616661Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.61669Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616739Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616805Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617009Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617111Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617201Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.620924Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621045Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621081Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 06:24:45 up 14 min,  0 users,  load average: 0.77, 0.54, 0.34
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2] <==
	I0315 06:22:29.228074       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:22:29.621481       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:22:31.887860       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:22:32.888703       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:22:44.893486       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 06:22:50.321382       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe] <==
	I0315 06:24:15.661597       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:25.670894       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:25.670947       1 main.go:227] handling current node
	I0315 06:24:25.670960       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:25.670967       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:25.671104       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:25.671136       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:25.671214       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:25.671324       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:35.686912       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:35.686961       1 main.go:227] handling current node
	I0315 06:24:35.686972       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:35.686978       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:35.687088       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:35.687093       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:35.687164       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:35.687192       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:45.721842       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:45.721893       1 main.go:227] handling current node
	I0315 06:24:45.721905       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:45.721911       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:45.722013       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:45.722018       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:45.722059       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:45.722064       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13] <==
	I0315 06:22:34.008640       1 options.go:220] external host was not specified, using 192.168.39.78
	I0315 06:22:34.012893       1 server.go:148] Version: v1.28.4
	I0315 06:22:34.013441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:22:34.987333       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0315 06:22:35.012336       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0315 06:22:35.012394       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0315 06:22:35.012698       1 instance.go:298] Using reconciler: lease
	W0315 06:22:54.982973       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0315 06:22:54.985433       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0315 06:22:55.015495       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db] <==
	I0315 06:23:14.774469       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 06:23:14.797211       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:23:14.800478       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:23:14.801403       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0315 06:23:14.801470       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0315 06:23:14.846155       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 06:23:14.872622       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 06:23:14.877871       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 06:23:14.878077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 06:23:14.884210       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 06:23:14.885060       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 06:23:14.885097       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 06:23:14.909035       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 06:23:14.909093       1 aggregator.go:166] initial CRD sync complete...
	I0315 06:23:14.909114       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 06:23:14.909119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 06:23:14.909125       1 cache.go:39] Caches are synced for autoregister controller
	I0315 06:23:14.930947       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0315 06:23:14.944567       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.89]
	I0315 06:23:14.961452       1 controller.go:624] quota admission added evaluator for: endpoints
	I0315 06:23:15.006374       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0315 06:23:15.029557       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0315 06:23:15.787689       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0315 06:23:16.485823       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.78 192.168.39.89]
	W0315 06:23:26.491061       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.78]
	
	
	==> kube-controller-manager [002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c] <==
	I0315 06:22:34.797179       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:22:35.042135       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:22:35.042305       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:22:35.044748       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:22:35.045354       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:22:35.045398       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:22:35.045423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:22:56.022068       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306] <==
	I0315 06:23:29.238203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="218.763µs"
	I0315 06:23:29.238456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.019µs"
	I0315 06:23:29.265639       1 shared_informer.go:318] Caches are synced for resource quota
	I0315 06:23:29.330838       1 shared_informer.go:318] Caches are synced for resource quota
	I0315 06:23:29.333403       1 shared_informer.go:318] Caches are synced for daemon sets
	I0315 06:23:29.363437       1 shared_informer.go:318] Caches are synced for taint
	I0315 06:23:29.363578       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0315 06:23:29.363733       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665-m02"
	I0315 06:23:29.363803       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665-m03"
	I0315 06:23:29.363863       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665-m04"
	I0315 06:23:29.363917       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-866665"
	I0315 06:23:29.363952       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0315 06:23:29.364001       1 taint_manager.go:210] "Sending events to api server"
	I0315 06:23:29.364954       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0315 06:23:29.365111       1 event.go:307] "Event occurred" object="ha-866665-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller"
	I0315 06:23:29.365144       1 event.go:307] "Event occurred" object="ha-866665-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller"
	I0315 06:23:29.365153       1 event.go:307] "Event occurred" object="ha-866665-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller"
	I0315 06:23:29.365159       1 event.go:307] "Event occurred" object="ha-866665" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665 event: Registered Node ha-866665 in Controller"
	I0315 06:23:29.736766       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 06:23:29.736836       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0315 06:23:29.736916       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 06:23:55.619407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.467814ms"
	I0315 06:23:55.619642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.623µs"
	I0315 06:24:17.984045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.093666ms"
	I0315 06:24:17.984190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.784µs"
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	I0315 06:22:35.094334       1 server_others.go:69] "Using iptables proxy"
	E0315 06:22:38.033980       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:41.104106       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:44.177695       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:50.321628       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:59.539487       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:23:15.828877       1 node.go:141] Successfully retrieved node IP: 192.168.39.78
	I0315 06:23:15.901813       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:23:15.902012       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:23:15.915302       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:23:15.915863       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:23:15.916800       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:23:15.917093       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:23:15.920115       1 config.go:188] "Starting service config controller"
	I0315 06:23:15.920210       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:23:15.920400       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:23:15.920457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:23:15.921053       1 config.go:315] "Starting node config controller"
	I0315 06:23:15.921167       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:23:16.021446       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:23:16.021555       1 shared_informer.go:318] Caches are synced for node config
	I0315 06:23:16.021576       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0] <==
	E0315 06:19:42.930531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:42.930409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:42.930653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:46.002420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:46.002515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:52.148400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:52.148636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:04.433846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:04.434046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:16.720661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:16.721007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.937899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.938058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:44.369019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:44.369356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3] <==
	E0315 06:20:46.187160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.254890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:20:46.255001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:20:46.269076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.269195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.317565       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:20:46.317635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:20:46.544563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:20:46.544616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:20:46.741363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 06:20:46.741423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 06:20:46.762451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 06:20:46.762543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 06:20:46.876133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.876166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:47.365393       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:20:47.365501       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:20:47.451958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:20:47.452070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:20:47.587631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:20:47.587662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0315 06:20:49.515923       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:20:49.516074       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:20:49.519966       1 run.go:74] "command failed" err="finished without leader elect"
	I0315 06:20:49.520010       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	W0315 06:23:05.725626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:05.725696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:05.802921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:05.802992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:11.310012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:11.310150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.617979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.618154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.695622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.696158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.716039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.716075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.737033       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.737492       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:14.814160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:23:14.823942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:23:14.824443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:23:14.826409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:23:14.823833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:23:14.824501       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:23:14.823561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:23:14.828473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:23:14.828581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:23:14.828694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0315 06:23:34.236426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:23:18 ha-866665 kubelet[1369]: E0315 06:23:18.461158    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:23:20 ha-866665 kubelet[1369]: I0315 06:23:20.544455    1369 scope.go:117] "RemoveContainer" containerID="8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2"
	Mar 15 06:23:20 ha-866665 kubelet[1369]: E0315 06:23:20.544739    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:23:30 ha-866665 kubelet[1369]: I0315 06:23:30.544500    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:23:30 ha-866665 kubelet[1369]: E0315 06:23:30.545188    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:23:31 ha-866665 kubelet[1369]: I0315 06:23:31.545640    1369 scope.go:117] "RemoveContainer" containerID="8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2"
	Mar 15 06:23:31 ha-866665 kubelet[1369]: E0315 06:23:31.546530    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:23:36 ha-866665 kubelet[1369]: I0315 06:23:36.249403    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-82knb" podStartSLOduration=564.519836319 podCreationTimestamp="2024-03-15 06:14:09 +0000 UTC" firstStartedPulling="2024-03-15 06:14:10.438649175 +0000 UTC m=+185.071777566" lastFinishedPulling="2024-03-15 06:14:13.168136751 +0000 UTC m=+187.801265151" observedRunningTime="2024-03-15 06:14:13.409107177 +0000 UTC m=+188.042235586" watchObservedRunningTime="2024-03-15 06:23:36.249323904 +0000 UTC m=+750.882452314"
	Mar 15 06:23:43 ha-866665 kubelet[1369]: I0315 06:23:43.544739    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:23:43 ha-866665 kubelet[1369]: E0315 06:23:43.545086    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:23:44 ha-866665 kubelet[1369]: I0315 06:23:44.544510    1369 scope.go:117] "RemoveContainer" containerID="8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2"
	Mar 15 06:23:56 ha-866665 kubelet[1369]: I0315 06:23:56.544678    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:23:56 ha-866665 kubelet[1369]: E0315 06:23:56.546129    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:05 ha-866665 kubelet[1369]: E0315 06:24:05.567838    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:24:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:24:11 ha-866665 kubelet[1369]: I0315 06:24:11.544731    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:11 ha-866665 kubelet[1369]: E0315 06:24:11.545282    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:22 ha-866665 kubelet[1369]: I0315 06:24:22.543789    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:22 ha-866665 kubelet[1369]: E0315 06:24:22.543999    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:33 ha-866665 kubelet[1369]: I0315 06:24:33.544155    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:33 ha-866665 kubelet[1369]: E0315 06:24:33.544623    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:45 ha-866665 kubelet[1369]: I0315 06:24:45.544310    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:24:44.434562   32248 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:261: (dbg) Run:  kubectl --context ha-866665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.76s)
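The "bufio.Scanner: token too long" error captured in the stderr block above is a Go-level limit rather than a cluster problem: bufio.Scanner refuses any single line longer than its buffer, which defaults to 64 KiB, and lastStart.txt evidently contains log lines longer than that, so the post-mortem collector could not echo the last start log. A minimal sketch of reading such a file with an enlarged scanner buffer follows; the file path and buffer sizes are illustrative assumptions, not the test suite's actual implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; the report refers to .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; a longer line makes Scan fail
	// with bufio.ErrTooLong ("token too long"). Raise the cap explicitly.
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading lastStart.txt:", err)
	}
}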

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (19.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 node delete m03 -v=7 --alsologtostderr
E0315 06:24:58.534486   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 node delete m03 -v=7 --alsologtostderr: (16.646610834s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 2 (658.167512ms)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:25:03.528841   32573 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:25:03.528977   32573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:25:03.528989   32573 out.go:304] Setting ErrFile to fd 2...
	I0315 06:25:03.528997   32573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:25:03.529571   32573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:25:03.529885   32573 out.go:298] Setting JSON to false
	I0315 06:25:03.529910   32573 mustload.go:65] Loading cluster: ha-866665
	I0315 06:25:03.530448   32573 notify.go:220] Checking for updates...
	I0315 06:25:03.531057   32573 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:25:03.531079   32573 status.go:255] checking status of ha-866665 ...
	I0315 06:25:03.531524   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.531580   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.553833   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0315 06:25:03.554350   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.554885   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.554909   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.555254   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.555504   32573 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:25:03.557424   32573 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:25:03.557443   32573 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:25:03.557709   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.557746   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.573474   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0315 06:25:03.573887   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.574334   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.574363   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.574733   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.574902   32573 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:25:03.578410   32573 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:25:03.579112   32573 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:25:03.579135   32573 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:25:03.579304   32573 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:25:03.579629   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.579665   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.596285   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0315 06:25:03.596836   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.597301   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.597324   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.597634   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.597828   32573 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:25:03.597987   32573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:25:03.598017   32573 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:25:03.601115   32573 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:25:03.601553   32573 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:25:03.601582   32573 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:25:03.601799   32573 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:25:03.601982   32573 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:25:03.602125   32573 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:25:03.602258   32573 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:25:03.684934   32573 ssh_runner.go:195] Run: systemctl --version
	I0315 06:25:03.692002   32573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:25:03.708791   32573 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:25:03.708820   32573 api_server.go:166] Checking apiserver status ...
	I0315 06:25:03.708853   32573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:25:03.727134   32573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5144/cgroup
	W0315 06:25:03.738827   32573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:25:03.738894   32573 ssh_runner.go:195] Run: ls
	I0315 06:25:03.743532   32573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:25:03.750752   32573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:25:03.750774   32573 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:25:03.750783   32573 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:25:03.750798   32573 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:25:03.751081   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.751119   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.767088   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44545
	I0315 06:25:03.767601   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.768080   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.768101   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.768540   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.768727   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:25:03.770193   32573 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:25:03.770209   32573 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:25:03.770478   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.770520   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.786719   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0315 06:25:03.787197   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.787670   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.787695   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.788034   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.788281   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:25:03.791759   32573 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:03.792168   32573 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:22:39 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:25:03.792199   32573 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:03.792367   32573 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:25:03.792742   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.792784   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.808943   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0315 06:25:03.809439   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.810086   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.810113   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.810451   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.810651   32573 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:25:03.810813   32573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:25:03.810830   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:25:03.813854   32573 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:03.814339   32573 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:22:39 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:25:03.814370   32573 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:03.814455   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:25:03.814629   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:25:03.814783   32573 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:25:03.814968   32573 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:25:03.907470   32573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:25:03.927116   32573 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:25:03.927144   32573 api_server.go:166] Checking apiserver status ...
	I0315 06:25:03.927218   32573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:25:03.944933   32573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	W0315 06:25:03.955760   32573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:25:03.955818   32573 ssh_runner.go:195] Run: ls
	I0315 06:25:03.961093   32573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:25:03.968707   32573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 06:25:03.968733   32573 status.go:422] ha-866665-m02 apiserver status = Running (err=<nil>)
	I0315 06:25:03.968741   32573 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:25:03.968756   32573 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:25:03.969054   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.969095   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:03.983926   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I0315 06:25:03.984392   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:03.984896   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:03.984911   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:03.985283   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:03.985471   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:25:03.987203   32573 status.go:330] ha-866665-m04 host status = "Running" (err=<nil>)
	I0315 06:25:03.987221   32573 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:25:03.987561   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:03.987595   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:04.002340   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0315 06:25:04.002699   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:04.003175   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:04.003195   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:04.003552   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:04.003741   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetIP
	I0315 06:25:04.006570   32573 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:04.007009   32573 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:24:35 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:25:04.007041   32573 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:04.007186   32573 host.go:66] Checking if "ha-866665-m04" exists ...
	I0315 06:25:04.007558   32573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:04.007599   32573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:04.023103   32573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0315 06:25:04.023650   32573 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:04.024169   32573 main.go:141] libmachine: Using API Version  1
	I0315 06:25:04.024192   32573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:04.024584   32573 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:04.024809   32573 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:25:04.025011   32573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:25:04.025035   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:25:04.028157   32573 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:04.028631   32573 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:24:35 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:25:04.028656   32573 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:04.028829   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:25:04.029020   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:25:04.029203   32573 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:25:04.029342   32573 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:25:04.112796   32573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:25:04.128972   32573 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr" : exit status 2
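Exit status 2 from the status command is the command reporting a degraded cluster rather than crashing: the stdout block above shows the kubelet on ha-866665-m04 as Stopped, status signals a not-running component through a non-zero exit code, and the test treats any non-zero code as a failure. A small, hypothetical Go sketch of distinguishing that case with os/exec is shown below; the binary path and arguments mirror the log but this is not the test suite's code.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Arguments mirror the failing invocation in the log; the relative
	// binary path is an assumption about the local workspace layout.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-866665",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The command ran but reported a problem (here: a node whose
		// kubelet is not running), encoded in its exit code.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		// The binary could not be started at all.
		fmt.Fprintln(os.Stderr, err)
	}
}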
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.8608127s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	| node    | ha-866665 node delete m03 -v=7                                                   | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC | 15 Mar 24 06:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:20:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:20:48.233395   31266 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:20:48.233693   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233703   31266 out.go:304] Setting ErrFile to fd 2...
	I0315 06:20:48.233707   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233974   31266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:20:48.234536   31266 out.go:298] Setting JSON to false
	I0315 06:20:48.235411   31266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3745,"bootTime":1710479904,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:20:48.235482   31266 start.go:139] virtualization: kvm guest
	I0315 06:20:48.238560   31266 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:20:48.240218   31266 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:20:48.240225   31266 notify.go:220] Checking for updates...
	I0315 06:20:48.241922   31266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:20:48.243333   31266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:20:48.244647   31266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:20:48.245904   31266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:20:48.247270   31266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:20:48.249101   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:48.249189   31266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:20:48.249650   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.249692   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.264611   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0315 06:20:48.265138   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.265713   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.265743   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.266115   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.266310   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.301775   31266 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:20:48.303090   31266 start.go:297] selected driver: kvm2
	I0315 06:20:48.303107   31266 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.303243   31266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:20:48.303557   31266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.303624   31266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:20:48.318165   31266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:20:48.318839   31266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:20:48.318901   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:20:48.318914   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:20:48.318976   31266 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.319098   31266 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.320866   31266 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:20:48.322118   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:20:48.322150   31266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:20:48.322163   31266 cache.go:56] Caching tarball of preloaded images
	I0315 06:20:48.322262   31266 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:20:48.322275   31266 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:20:48.322412   31266 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:20:48.322603   31266 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:20:48.322644   31266 start.go:364] duration metric: took 25.657µs to acquireMachinesLock for "ha-866665"
	I0315 06:20:48.322657   31266 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:20:48.322667   31266 fix.go:54] fixHost starting: 
	I0315 06:20:48.322903   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.322934   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.337122   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0315 06:20:48.337522   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.337966   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.337984   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.338306   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.338487   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.338668   31266 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:20:48.340290   31266 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:20:48.340310   31266 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:20:48.342346   31266 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:20:48.343623   31266 machine.go:94] provisionDockerMachine start ...
	I0315 06:20:48.343641   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.343821   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.346289   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346782   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.346824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346966   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.347119   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347285   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347418   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.347544   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.347724   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.347735   31266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:20:48.450351   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.450383   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450661   31266 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:20:48.450684   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450849   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.453380   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453790   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.453818   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453891   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.454090   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454251   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454383   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.454547   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.454720   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.454732   31266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:20:48.576850   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.576878   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.579606   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.579972   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.580005   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.580121   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.580306   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580483   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580636   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.580815   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.581041   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.581065   31266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:20:48.682862   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
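Editor's note: the SSH script above is minikube's idempotent hostname provisioning; it only rewrites the 127.0.1.1 entry (or appends one) when /etc/hosts has no line ending in the hostname. A minimal way to confirm the result inside the guest, shown only as an illustration and not part of the test run:

    hostname                       # expect: ha-866665
    cat /etc/hostname              # expect: ha-866665
    grep 'ha-866665' /etc/hosts    # expect: 127.0.1.1 ha-866665 (added only if it was missing)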
	I0315 06:20:48.682887   31266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:20:48.682906   31266 buildroot.go:174] setting up certificates
	I0315 06:20:48.682935   31266 provision.go:84] configureAuth start
	I0315 06:20:48.682950   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.683239   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:20:48.686023   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686417   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.686450   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686552   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.688525   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.688908   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.688934   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.689080   31266 provision.go:143] copyHostCerts
	I0315 06:20:48.689110   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689138   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:20:48.689146   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689206   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:20:48.689286   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689314   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:20:48.689321   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689345   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:20:48.689388   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689404   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:20:48.689410   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689430   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:20:48.689471   31266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:20:49.237189   31266 provision.go:177] copyRemoteCerts
	I0315 06:20:49.237247   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:20:49.237269   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.239856   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240163   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.240195   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240300   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.240501   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.240683   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.240845   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:20:49.320109   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:20:49.320179   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:20:49.347303   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:20:49.347368   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:20:49.373709   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:20:49.373780   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:20:49.400806   31266 provision.go:87] duration metric: took 717.857802ms to configureAuth
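Editor's note: configureAuth regenerates the Docker-style TLS material (ca.pem, server.pem, server-key.pem, with SANs for 127.0.0.1, 192.168.39.78, ha-866665, localhost and minikube per the log above) and copies it to /etc/docker on the guest. A quick sanity check of what landed there, hedged as an illustration only (requires OpenSSL 1.1.1+ for the -ext option):

    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName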
	I0315 06:20:49.400834   31266 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:20:49.401098   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:49.401246   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.404071   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404492   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.404524   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404710   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.404892   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405052   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405236   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.405428   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:49.405641   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:49.405663   31266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:22:20.418848   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:22:20.418872   31266 machine.go:97] duration metric: took 1m32.075236038s to provisionDockerMachine
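Editor's note: the tee of /etc/sysconfig/crio.minikube followed by "sudo systemctl restart crio" was issued at 06:20:49 but did not return until 06:22:20, so the CRI-O restart accounts for essentially all of the 1m32s reported for provisionDockerMachine. A rough way to attribute that delay when reproducing locally (illustrative only):

    time sudo systemctl restart crio
    journalctl -u crio --since "5 min ago" --no-pager | tail -n 50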
	I0315 06:22:20.418884   31266 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:22:20.418893   31266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:22:20.418908   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.419251   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:22:20.419276   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.422223   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422630   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.422653   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422780   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.422931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.423065   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.423242   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.505795   31266 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:22:20.510297   31266 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:22:20.510324   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:22:20.510382   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:22:20.510451   31266 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:22:20.510461   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:22:20.510550   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:22:20.521122   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:20.547933   31266 start.go:296] duration metric: took 129.036646ms for postStartSetup
	I0315 06:22:20.547978   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.548256   31266 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:22:20.548281   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.550824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551345   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.551367   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551588   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.551778   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.551927   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.552071   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:22:20.631948   31266 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:22:20.631985   31266 fix.go:56] duration metric: took 1m32.309321607s for fixHost
	I0315 06:22:20.632007   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.635221   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635666   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.635698   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635839   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.636059   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636205   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636327   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.636488   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:22:20.636663   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:22:20.636675   31266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:22:20.737851   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483740.705795243
	
	I0315 06:22:20.737873   31266 fix.go:216] guest clock: 1710483740.705795243
	I0315 06:22:20.737880   31266 fix.go:229] Guest: 2024-03-15 06:22:20.705795243 +0000 UTC Remote: 2024-03-15 06:22:20.631992794 +0000 UTC m=+92.446679747 (delta=73.802449ms)
	I0315 06:22:20.737903   31266 fix.go:200] guest clock delta is within tolerance: 73.802449ms
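Editor's note: fixHost reads the guest clock over SSH and compares it with the host clock, resyncing only when the delta exceeds the tolerance; here 73.8ms is within bounds. The "date +%!s(MISSING).%!N(MISSING)" text above is most likely a printf-style logging artifact of the literal command "date +%s.%N". A hand-rolled version of the same comparison (key path taken from the log, otherwise illustrative):

    guest=$(ssh -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa docker@192.168.39.78 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc) s"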
	I0315 06:22:20.737909   31266 start.go:83] releasing machines lock for "ha-866665", held for 1m32.415256417s
	I0315 06:22:20.737929   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.738195   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:20.741307   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.741994   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.742025   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.742221   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.742829   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743040   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743136   31266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:22:20.743200   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.743336   31266 ssh_runner.go:195] Run: cat /version.json
	I0315 06:22:20.743366   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.746043   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746264   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746484   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746514   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746631   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.746767   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746784   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746801   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.746931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.747000   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747060   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.747123   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.747171   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747308   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.822170   31266 ssh_runner.go:195] Run: systemctl --version
	I0315 06:22:20.864338   31266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:22:21.034553   31266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:22:21.041415   31266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:22:21.041490   31266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:22:21.051566   31266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:22:21.051586   31266 start.go:494] detecting cgroup driver to use...
	I0315 06:22:21.051648   31266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:22:21.068910   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:22:21.083923   31266 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:22:21.083988   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:22:21.099367   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:22:21.114470   31266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:22:21.261920   31266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:22:21.413984   31266 docker.go:233] disabling docker service ...
	I0315 06:22:21.414050   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:22:21.432166   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:22:21.446453   31266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:22:21.603068   31266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:22:21.758747   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:22:21.773638   31266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:22:21.795973   31266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:22:21.796067   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.809281   31266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:22:21.809373   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.820969   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.832684   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.843891   31266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:22:21.855419   31266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:22:21.867162   31266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:22:21.877235   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:22.024876   31266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:22:27.210727   31266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.185810009s)
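Editor's note: before this restart, minikube wrote /etc/crictl.yaml to point crictl at the CRI-O socket and used the sed edits above to pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and force conmon into the pod cgroup. A quick way to confirm both the crictl wiring and the drop-in config on the guest (illustrative):

    cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    sudo crictl version     # should report cri-o 1.29.1 once the restart completes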
	I0315 06:22:27.210754   31266 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:22:27.210796   31266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:22:27.215990   31266 start.go:562] Will wait 60s for crictl version
	I0315 06:22:27.216039   31266 ssh_runner.go:195] Run: which crictl
	I0315 06:22:27.219900   31266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:22:27.261162   31266 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:22:27.261285   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.294548   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.328151   31266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:22:27.329667   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:27.332373   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.332800   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:27.332816   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.333023   31266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:22:27.338097   31266 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:22:27.338218   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:22:27.338265   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.384063   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.384086   31266 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:22:27.384141   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.423578   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.423601   31266 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:22:27.423609   31266 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:22:27.423697   31266 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:22:27.423756   31266 ssh_runner.go:195] Run: crio config
	I0315 06:22:27.482626   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:22:27.482649   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:22:27.482662   31266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:22:27.482691   31266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:22:27.482834   31266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
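Editor's note: the rendered kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into the file later copied to /var/tmp/minikube/kubeadm.yaml.new. A quick offline syntax check, assuming a kubeadm new enough (v1.27+) to ship the "config validate" subcommand; this is an illustration, not something the test itself runs:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new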
	
	I0315 06:22:27.482850   31266 kube-vip.go:111] generating kube-vip config ...
	I0315 06:22:27.482886   31266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:22:27.497074   31266 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:22:27.497204   31266 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
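Editor's note: the generated static-pod manifest runs kube-vip with ARP-based leader election for the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane members (lb_enable/lb_port). Ways one might confirm it is doing its job once the node is up (illustrative):

    ip addr show eth0 | grep 192.168.39.254          # the VIP is bound on the current leader
    kubectl -n kube-system get lease plndr-cp-lock   # kube-vip leader-election lease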
	I0315 06:22:27.497284   31266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:22:27.509195   31266 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:22:27.509286   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:22:27.520191   31266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:22:27.538135   31266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:22:27.555610   31266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:22:27.573955   31266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:22:27.593596   31266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:22:27.598156   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:27.747192   31266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:22:27.764301   31266 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:22:27.764333   31266 certs.go:194] generating shared ca certs ...
	I0315 06:22:27.764355   31266 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.764534   31266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:22:27.764615   31266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:22:27.764630   31266 certs.go:256] generating profile certs ...
	I0315 06:22:27.764730   31266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:22:27.764765   31266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68
	I0315 06:22:27.764786   31266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:22:27.902249   31266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 ...
	I0315 06:22:27.902281   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68: {Name:mk4ec3568f719ba46ca54f4c420840c2b2fdca4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902456   31266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 ...
	I0315 06:22:27.902473   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68: {Name:mka2b45e463d67423a36473df143eb634ee13f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902571   31266 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:22:27.902733   31266 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:22:27.902906   31266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:22:27.902923   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:22:27.902942   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:22:27.902957   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:22:27.902977   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:22:27.903001   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:22:27.903021   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:22:27.903035   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:22:27.903050   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:22:27.903117   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:22:27.903157   31266 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:22:27.903170   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:22:27.903219   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:22:27.903252   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:22:27.903289   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:22:27.903350   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:27.903416   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:22:27.903454   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:22:27.903473   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:27.904019   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:22:27.931140   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:22:27.956928   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:22:27.981629   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:22:28.007100   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 06:22:28.032763   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:22:28.057851   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:22:28.086521   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:22:28.112212   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:22:28.139218   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:22:28.164931   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:22:28.191225   31266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
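Editor's note: the apiserver serving certificate generated above carries SANs for every control-plane address plus the service and HA VIPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.78, 192.168.39.27, 192.168.39.89, 192.168.39.254). Inspecting the copy on the guest (illustrative):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'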
	I0315 06:22:28.209099   31266 ssh_runner.go:195] Run: openssl version
	I0315 06:22:28.215089   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:22:28.226199   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230951   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230998   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.237257   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:22:28.247307   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:22:28.258550   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263269   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263323   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.269418   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:22:28.283320   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:22:28.347367   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358725   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358796   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.386093   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
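Editor's note: the "openssl x509 -hash" calls compute the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0 for minikubeCA.pem here), which is the layout OpenSSL's default verification path expects. The same check by hand (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    ls -l /etc/ssl/certs/${h}.0                                                    # -> /etc/ssl/certs/minikubeCA.pem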
	I0315 06:22:28.404000   31266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:22:28.431851   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:22:28.439647   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:22:28.452233   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:22:28.464741   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:22:28.479804   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:22:28.488488   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
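Editor's note: each "-checkend 86400" run asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration before the cluster is started. The same check by hand (illustrative):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid in 24h" || echo "expires within 24h, regenerate"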
	I0315 06:22:28.494840   31266 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:22:28.495020   31266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:22:28.495101   31266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:22:28.554274   31266 cri.go:89] found id: "c6fb756ec96d63a35d3d309d8f9f0e4b3ba437bc3e2ab9b64aeedaefae913df8"
	I0315 06:22:28.554299   31266 cri.go:89] found id: "dcdaf40ca56142d0131435198e249e6b4f6618b31356b7d2753d5ef5312de8d5"
	I0315 06:22:28.554305   31266 cri.go:89] found id: "c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2"
	I0315 06:22:28.554310   31266 cri.go:89] found id: "9b4a5b482d487e39ba565da240819c12b69d88ec3854e05cc308a1d7226aaa46"
	I0315 06:22:28.554314   31266 cri.go:89] found id: "21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0"
	I0315 06:22:28.554317   31266 cri.go:89] found id: "652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855"
	I0315 06:22:28.554322   31266 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:22:28.554325   31266 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:22:28.554329   31266 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:22:28.554336   31266 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:22:28.554340   31266 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:22:28.554343   31266 cri.go:89] found id: "b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551"
	I0315 06:22:28.554348   31266 cri.go:89] found id: "dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323"
	I0315 06:22:28.554351   31266 cri.go:89] found id: ""
	I0315 06:22:28.554403   31266 ssh_runner.go:195] Run: sudo runc list -f json
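	The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above is what produces the container IDs that cri.go then logs as "found id". A minimal sketch of the same listing, assuming crictl is on the node's PATH and skipping the SSH hop used in the log, could be:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs returns the IDs of every kube-system container
	// (running or exited) reported by crictl, one ID per output line.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		fmt.Println(len(ids), "containers:", ids, err)
	}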
	
	
	==> CRI-O <==
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.791613903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483904791582620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7211632a-4067-4222-994e-76ed8c3aa95c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.792376236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=930e5080-4138-4b0d-9689-bdbe5bd7c0fd name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.792434053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=930e5080-4138-4b0d-9689-bdbe5bd7c0fd name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.792977843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b6040
9e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483058632020
970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=930e5080-4138-4b0d-9689-bdbe5bd7c0fd name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.853173983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=773cb5cf-79cb-41ae-92a0-35c6666001e8 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.853312040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=773cb5cf-79cb-41ae-92a0-35c6666001e8 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.854611109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dc337e9-790a-4190-8bc6-f0c2eb10c63a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.855407372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483904855380768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dc337e9-790a-4190-8bc6-f0c2eb10c63a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.856264366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d2c4864-09f4-4139-89ad-db7b5d996c9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.856328185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d2c4864-09f4-4139-89ad-db7b5d996c9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.856816213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b6040
9e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483058632020
970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d2c4864-09f4-4139-89ad-db7b5d996c9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.906433367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c2a1d2d-6a6b-431a-8c87-309ccc5e4edd name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.906531900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c2a1d2d-6a6b-431a-8c87-309ccc5e4edd name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.908460863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1bfc98c-1a97-47f6-8df2-b05789bb7859 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.908968418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483904908942809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1bfc98c-1a97-47f6-8df2-b05789bb7859 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.909775262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26451bfe-ac84-4bfc-925b-1a3f8f3e8cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.909854902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26451bfe-ac84-4bfc-925b-1a3f8f3e8cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.910425726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b6040
9e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483058632020
970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26451bfe-ac84-4bfc-925b-1a3f8f3e8cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.955714267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0cbdd00-9c6b-4fee-be73-38223b2e35a3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.955820711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0cbdd00-9c6b-4fee-be73-38223b2e35a3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.957037050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d016d65-bcde-4dd7-a461-28860a39f2f1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.957622645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710483904957597477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d016d65-bcde-4dd7-a461-28860a39f2f1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.958167979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c81cc1f0-f892-4a37-b270-ab03953e9c6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.958281066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c81cc1f0-f892-4a37-b270-ab03953e9c6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:25:04 ha-866665 crio[3914]: time="2024-03-15 06:25:04.959384203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483795596484926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710483792566381329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b6040
9e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710483753272873687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483748777541212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2,PodSandboxId:2095201e88b515e4724e2559e88a3cc7a779a36e2768302e1fe314c8264c99db,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483554558436713,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483058632020
970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c81cc1f0-f892-4a37-b270-ab03953e9c6e name=/runtime.v1.RuntimeService/ListContainers
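
	Note on the crio entries above: they are CRI-O answering the kubelet's periodic CRI gRPC polls (ListContainers, Version, ImageFsInfo), so the payload is simply the node's full container inventory at 06:25:04. A minimal sketch of pulling the same inventory by hand, assuming the standard crictl binary and the default CRI-O socket path inside the ha-866665 VM (neither is shown in the capture itself):

	  $ minikube -p ha-866665 ssh
	  # then, inside the VM:
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 5eaa4a539d19c

	crictl ps -a lists running and exited containers (the same records as the ListContainersResponse), and inspect expands a single entry such as the storage-provisioner container referenced above.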
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5eaa4a539d19c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 seconds ago       Running             storage-provisioner       5                   95c517450cdc3       storage-provisioner
	a632d3a2baa85       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   c7f6cbdff0a6d       kindnet-9nvvx
	e490c56eb4c5d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   70721076f18d9       kube-controller-manager-ha-866665
	927c05bd830a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       4                   95c517450cdc3       storage-provisioner
	a912dc6e7f806       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   f7b655acbd708       kube-apiserver-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	950153b4c9efe       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   b3fef0e73d7bb       kube-vip-ha-866665
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   79337bac30908       etcd-ha-866665
	002360447d19f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   70721076f18d9       kube-controller-manager-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	a2fe596c61a10       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   f7b655acbd708       kube-apiserver-ha-866665
	8e97e91558ead       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   c7f6cbdff0a6d       kindnet-9nvvx
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
	c0c01dd7f22bd       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   2095201e88b51       kube-vip-ha-866665
	3893d7b08f562       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4b1a833979698       busybox-5b5d89c9d6-82knb
	bede6c7f8912b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   89474c2214060       coredns-5dd5756b68-r57px
	c0ecd2e858892       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   72c22c098aee5       coredns-5dd5756b68-mgthb
	c07640cff4ced       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   e15b87fb1896f       kube-proxy-sbxgg
	7fcd79ed43f7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      14 minutes ago       Exited              kube-scheduler            0                   97bf2aa8738ce       kube-scheduler-ha-866665
	adc8145247000       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      14 minutes ago       Exited              etcd                      0                   682c38a8f4263       etcd-ha-866665
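
	Reading the table: each container name that appears both as an Exited row and as a Running row with a higher ATTEMPT is the same pod's container before and after the node restart, e.g. kube-apiserver attempt 2 (Exited) replaced by attempt 3 (Running), and storage-provisioner already on attempt 5. A quick cross-check against the kubelet's view, assuming minikube created a kubeconfig context named after the profile (ha-866665):

	  $ kubectl --context ha-866665 -n kube-system get pods -o wide
	  $ kubectl --context ha-866665 -n kube-system get pod kube-apiserver-ha-866665 -o jsonpath='{.status.containerStatuses[0].restartCount}'

	The restartCount reported by Kubernetes should line up with the ATTEMPT column reported by CRI-O.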
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40960 - 48923 "HINFO IN 5600361727797088866.7930505399270773017. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012438849s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
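
	The warnings at the tail of this CoreDNS instance show it probing the in-cluster API endpoint https://10.96.0.1:443 while the control plane was still recovering: "no route to host" while the service VIP had no reachable backend, then "TLS handshake timeout" and "connection refused" while kube-apiserver itself was restarting. A hedged way to confirm the endpoint is serving again after the restart, using only standard kubectl calls and the same assumed context name as above:

	  $ kubectl --context ha-866665 get svc kubernetes -o wide
	  $ kubectl --context ha-866665 get --raw '/readyz?verbose'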
	
	
	==> coredns [bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780] <==
	[INFO] 10.244.0.4:38164 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009591847s
	[INFO] 10.244.1.2:58652 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000766589s
	[INFO] 10.244.1.2:51069 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001862794s
	[INFO] 10.244.0.4:39512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00055199s
	[INFO] 10.244.0.4:46188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133209s
	[INFO] 10.244.0.4:45008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008468s
	[INFO] 10.244.0.4:37076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097079s
	[INFO] 10.244.1.2:45388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815413s
	[INFO] 10.244.1.2:40983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165928s
	[INFO] 10.244.1.2:41822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199064s
	[INFO] 10.244.1.2:51003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093469s
	[INFO] 10.244.2.2:52723 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155039s
	[INFO] 10.244.2.2:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105876s
	[INFO] 10.244.2.2:40110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118647s
	[INFO] 10.244.1.2:48735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190723s
	[INFO] 10.244.1.2:59420 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115761s
	[INFO] 10.244.1.2:44465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090898s
	[INFO] 10.244.2.2:55054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145748s
	[INFO] 10.244.2.2:48352 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081059s
	[INFO] 10.244.0.4:53797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115756s
	[INFO] 10.244.0.4:52841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114315s
	[INFO] 10.244.1.2:34071 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158733s
	[INFO] 10.244.2.2:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239839s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90] <==
	[INFO] 10.244.2.2:48404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148272s
	[INFO] 10.244.2.2:45614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171944s
	[INFO] 10.244.2.2:42730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	[INFO] 10.244.2.2:38361 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605049s
	[INFO] 10.244.2.2:54334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:51787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138576s
	[INFO] 10.244.0.4:35351 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081934s
	[INFO] 10.244.0.4:56185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140731s
	[INFO] 10.244.0.4:49966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062146s
	[INFO] 10.244.1.2:35089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123543s
	[INFO] 10.244.2.2:59029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184488s
	[INFO] 10.244.2.2:57369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103045s
	[INFO] 10.244.0.4:37219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243853s
	[INFO] 10.244.0.4:39054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129011s
	[INFO] 10.244.1.2:38863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321539s
	[INFO] 10.244.1.2:42772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125764s
	[INFO] 10.244.1.2:50426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114767s
	[INFO] 10.244.2.2:48400 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140476s
	[INFO] 10.244.2.2:47852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177728s
	[INFO] 10.244.2.2:44657 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185799s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59996 - 31934 "HINFO IN 4559653855558661573.857792855383948485. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019139547s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	Name:               ha-866665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_11_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:11:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:23:14 +0000   Fri, 15 Mar 2024 06:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    ha-866665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3eab3c085e414bb06a8b946d23d263
	  System UUID:                3e3eab3c-085e-414b-b06a-8b946d23d263
	  Boot ID:                    67c0c773-5540-4e63-8171-6ccf807dc545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-82knb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-mgthb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-r57px             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-866665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-9nvvx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-866665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-866665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-sbxgg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-866665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-866665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 13m              kube-proxy       
	  Normal   Starting                 109s             kube-proxy       
	  Normal   NodeHasSufficientPID     14m              kubelet          Node ha-866665 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m              kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m              kubelet          Node ha-866665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m              kubelet          Node ha-866665 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m              kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m              node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   NodeReady                13m              kubelet          Node ha-866665 status is now: NodeReady
	  Normal   RegisteredNode           12m              node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           11m              node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Warning  ContainerGCFailed        3m (x2 over 4m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           101s             node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           96s              node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	  Normal   RegisteredNode           41s              node-controller  Node ha-866665 event: Registered Node ha-866665 in Controller
	
	
	Name:               ha-866665-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_12_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:12:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:25:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:24:00 +0000   Fri, 15 Mar 2024 06:23:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-866665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 58bd1411345f4ad89979a7572186fe49
	  System UUID:                58bd1411-345f-4ad8-9979-a7572186fe49
	  Boot ID:                    0ce5b345-fbd5-48ed-970b-3bf380d65432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sdxnc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-866665-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-26vqf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-866665-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-866665-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lqzk8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-866665-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-866665-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 106s                   kube-proxy       
	  Normal  RegisteredNode           12m                    node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  NodeNotReady             9m11s                  node-controller  Node ha-866665-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-866665-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-866665-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-866665-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                   node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	  Normal  RegisteredNode           41s                    node-controller  Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller
	
	
	Name:               ha-866665-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-866665-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=ha-866665
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_14_48_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:14:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-866665-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:18:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 06:15:18 +0000   Fri, 15 Mar 2024 06:24:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-866665-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba1c60db84af4e62b4dd3481111e694e
	  System UUID:                ba1c60db-84af-4e62-b4dd-3481111e694e
	  Boot ID:                    0376ead4-1240-436a-b9a9-8b12bb4d45e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j2vlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-bq6md    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-866665-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-866665-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-866665-m04 status is now: NodeReady
	  Normal  RegisteredNode           101s               node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  RegisteredNode           96s                node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	  Normal  NodeNotReady             61s                node-controller  Node ha-866665-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           41s                node-controller  Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller
	
	
	==> dmesg <==
	[  +0.056814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054962] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.193593] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.117038] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.245141] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.806127] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059748] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.159068] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.996795] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:11] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"info","ts":"2024-03-15T06:24:02.517582Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.532034Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"bd5db29ca66a387","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T06:24:02.532145Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:02.537394Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"bd5db29ca66a387","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T06:24:02.537526Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:14.11881Z","caller":"traceutil/trace.go:171","msg":"trace[1095642664] transaction","detail":"{read_only:false; response_revision:2261; number_of_response:1; }","duration":"107.14777ms","start":"2024-03-15T06:24:14.011626Z","end":"2024-03-15T06:24:14.118774Z","steps":["trace[1095642664] 'process raft request'  (duration: 99.543453ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T06:24:50.843189Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.89:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-15T06:24:50.855663Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.89:36722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-15T06:24:50.874648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 switched to configuration voters=(9511011272858222243 12642734584227255827)"}
	{"level":"info","ts":"2024-03-15T06:24:50.874879Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","removed-remote-peer-id":"bd5db29ca66a387","removed-remote-peer-urls":["https://192.168.39.89:2380"]}
	{"level":"info","ts":"2024-03-15T06:24:50.874961Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.875449Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:50.875502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.87573Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:50.875834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:50.876013Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.876602Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387","error":"context canceled"}
	{"level":"warn","ts":"2024-03-15T06:24:50.876769Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"bd5db29ca66a387","error":"failed to read bd5db29ca66a387 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-15T06:24:50.877001Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.877652Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387","error":"context canceled"}
	{"level":"info","ts":"2024-03-15T06:24:50.877751Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:50.877881Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:24:50.877987Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"83fde65c75733ea3","removed-remote-peer-id":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.884117Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"83fde65c75733ea3","remote-peer-id-stream-handler":"83fde65c75733ea3","remote-peer-id-from":"bd5db29ca66a387"}
	{"level":"warn","ts":"2024-03-15T06:24:50.892177Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.89:33506","server-name":"","error":"read tcp 192.168.39.78:2380->192.168.39.89:33506: read: connection reset by peer"}
	
	
	==> etcd [adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435] <==
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-15T06:20:49.61568Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:20:49.61574Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:20:49.615908Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:20:49.616111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616182Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616308Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616452Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616527Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616603Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616661Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.61669Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616739Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616805Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617009Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617111Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617201Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.620924Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621045Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621081Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 06:25:05 up 14 min,  0 users,  load average: 0.74, 0.55, 0.35
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2] <==
	I0315 06:22:29.228074       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:22:29.621481       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:22:31.887860       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:22:32.888703       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:22:44.893486       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 06:22:50.321382       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe] <==
	I0315 06:24:35.687093       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:35.687164       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:35.687192       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:45.721842       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:45.721893       1 main.go:227] handling current node
	I0315 06:24:45.721905       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:45.721911       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:45.722013       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:45.722018       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:45.722059       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:45.722064       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:55.736721       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:55.736767       1 main.go:227] handling current node
	I0315 06:24:55.736779       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:55.736785       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:55.736905       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:55.736931       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:55.736991       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:55.737018       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:25:05.752125       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:25:05.752883       1 main.go:227] handling current node
	I0315 06:25:05.752904       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:25:05.752913       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:25:05.753452       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:25:05.753523       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2fe596c61a1030ef4b0e1b5badadc60af4c6c72488f9d715d3746f5b26eec13] <==
	I0315 06:22:34.008640       1 options.go:220] external host was not specified, using 192.168.39.78
	I0315 06:22:34.012893       1 server.go:148] Version: v1.28.4
	I0315 06:22:34.013441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:22:34.987333       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0315 06:22:35.012336       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0315 06:22:35.012394       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0315 06:22:35.012698       1 instance.go:298] Using reconciler: lease
	W0315 06:22:54.982973       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0315 06:22:54.985433       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0315 06:22:55.015495       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db] <==
	I0315 06:23:14.774469       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 06:23:14.797211       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:23:14.800478       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:23:14.801403       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0315 06:23:14.801470       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0315 06:23:14.846155       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 06:23:14.872622       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 06:23:14.877871       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 06:23:14.878077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 06:23:14.884210       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 06:23:14.885060       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 06:23:14.885097       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 06:23:14.909035       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 06:23:14.909093       1 aggregator.go:166] initial CRD sync complete...
	I0315 06:23:14.909114       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 06:23:14.909119       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 06:23:14.909125       1 cache.go:39] Caches are synced for autoregister controller
	I0315 06:23:14.930947       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0315 06:23:14.944567       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.89]
	I0315 06:23:14.961452       1 controller.go:624] quota admission added evaluator for: endpoints
	I0315 06:23:15.006374       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0315 06:23:15.029557       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0315 06:23:15.787689       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0315 06:23:16.485823       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.78 192.168.39.89]
	W0315 06:23:26.491061       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.78]
	
	
	==> kube-controller-manager [002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c] <==
	I0315 06:22:34.797179       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:22:35.042135       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:22:35.042305       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:22:35.044748       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:22:35.045354       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:22:35.045398       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:22:35.045423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:22:56.022068       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306] <==
	I0315 06:23:29.365111       1 event.go:307] "Event occurred" object="ha-866665-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m02 event: Registered Node ha-866665-m02 in Controller"
	I0315 06:23:29.365144       1 event.go:307] "Event occurred" object="ha-866665-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m03 event: Registered Node ha-866665-m03 in Controller"
	I0315 06:23:29.365153       1 event.go:307] "Event occurred" object="ha-866665-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665-m04 event: Registered Node ha-866665-m04 in Controller"
	I0315 06:23:29.365159       1 event.go:307] "Event occurred" object="ha-866665" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-866665 event: Registered Node ha-866665 in Controller"
	I0315 06:23:29.736766       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 06:23:29.736836       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0315 06:23:29.736916       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 06:23:55.619407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.467814ms"
	I0315 06:23:55.619642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.623µs"
	I0315 06:24:17.984045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.093666ms"
	I0315 06:24:17.984190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.784µs"
	I0315 06:24:47.527541       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-zk7fs"
	I0315 06:24:47.556852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.381974ms"
	I0315 06:24:47.618204       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-zk7fs"
	I0315 06:24:47.652674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="95.423206ms"
	I0315 06:24:47.704937       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-pr7r9"
	I0315 06:24:47.739829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="86.954446ms"
	I0315 06:24:47.752748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.80921ms"
	I0315 06:24:47.752861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.791µs"
	I0315 06:24:47.752960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.547µs"
	I0315 06:24:49.658640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.548µs"
	I0315 06:24:50.115142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="122.422µs"
	I0315 06:24:50.147212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.853µs"
	I0315 06:24:50.152512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="96.517µs"
	I0315 06:25:04.383464       1 event.go:307] "Event occurred" object="ha-866665-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-866665-m03 event: Removing Node ha-866665-m03 from Controller"
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	I0315 06:22:35.094334       1 server_others.go:69] "Using iptables proxy"
	E0315 06:22:38.033980       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:41.104106       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:44.177695       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:50.321628       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:22:59.539487       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:23:15.828877       1 node.go:141] Successfully retrieved node IP: 192.168.39.78
	I0315 06:23:15.901813       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:23:15.902012       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:23:15.915302       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:23:15.915863       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:23:15.916800       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:23:15.917093       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:23:15.920115       1 config.go:188] "Starting service config controller"
	I0315 06:23:15.920210       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:23:15.920400       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:23:15.920457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:23:15.921053       1 config.go:315] "Starting node config controller"
	I0315 06:23:15.921167       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:23:16.021446       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:23:16.021555       1 shared_informer.go:318] Caches are synced for node config
	I0315 06:23:16.021576       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0] <==
	E0315 06:19:42.930531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:42.930409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:42.930653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:46.002420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:46.002515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:52.148400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:52.148636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:04.433846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:04.434046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:16.720661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:16.721007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.937899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.938058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:44.369019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:44.369356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3] <==
	E0315 06:20:46.187160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.254890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:20:46.255001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:20:46.269076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.269195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.317565       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:20:46.317635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:20:46.544563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:20:46.544616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:20:46.741363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 06:20:46.741423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 06:20:46.762451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 06:20:46.762543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 06:20:46.876133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.876166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:47.365393       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:20:47.365501       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:20:47.451958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:20:47.452070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:20:47.587631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:20:47.587662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0315 06:20:49.515923       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:20:49.516074       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:20:49.519966       1 run.go:74] "command failed" err="finished without leader elect"
	I0315 06:20:49.520010       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	W0315 06:23:05.725626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:05.725696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:05.802921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:05.802992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:11.310012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:11.310150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.617979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.618154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.695622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.696158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.716039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.716075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:12.737033       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:23:12.737492       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:23:14.814160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:23:14.823942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:23:14.824443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:23:14.826409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:23:14.823833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:23:14.824501       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:23:14.823561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:23:14.828473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:23:14.828581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:23:14.828694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0315 06:23:34.236426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:23:31 ha-866665 kubelet[1369]: I0315 06:23:31.545640    1369 scope.go:117] "RemoveContainer" containerID="8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2"
	Mar 15 06:23:31 ha-866665 kubelet[1369]: E0315 06:23:31.546530    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:23:36 ha-866665 kubelet[1369]: I0315 06:23:36.249403    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-82knb" podStartSLOduration=564.519836319 podCreationTimestamp="2024-03-15 06:14:09 +0000 UTC" firstStartedPulling="2024-03-15 06:14:10.438649175 +0000 UTC m=+185.071777566" lastFinishedPulling="2024-03-15 06:14:13.168136751 +0000 UTC m=+187.801265151" observedRunningTime="2024-03-15 06:14:13.409107177 +0000 UTC m=+188.042235586" watchObservedRunningTime="2024-03-15 06:23:36.249323904 +0000 UTC m=+750.882452314"
	Mar 15 06:23:43 ha-866665 kubelet[1369]: I0315 06:23:43.544739    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:23:43 ha-866665 kubelet[1369]: E0315 06:23:43.545086    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:23:44 ha-866665 kubelet[1369]: I0315 06:23:44.544510    1369 scope.go:117] "RemoveContainer" containerID="8e97e91558ead2f68ddac96781e9f6ddf49d3115db6db7739b67d483652e25d2"
	Mar 15 06:23:56 ha-866665 kubelet[1369]: I0315 06:23:56.544678    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:23:56 ha-866665 kubelet[1369]: E0315 06:23:56.546129    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:05 ha-866665 kubelet[1369]: E0315 06:24:05.567838    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:24:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:24:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:24:11 ha-866665 kubelet[1369]: I0315 06:24:11.544731    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:11 ha-866665 kubelet[1369]: E0315 06:24:11.545282    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:22 ha-866665 kubelet[1369]: I0315 06:24:22.543789    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:22 ha-866665 kubelet[1369]: E0315 06:24:22.543999    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:33 ha-866665 kubelet[1369]: I0315 06:24:33.544155    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:24:33 ha-866665 kubelet[1369]: E0315 06:24:33.544623    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:24:45 ha-866665 kubelet[1369]: I0315 06:24:45.544310    1369 scope.go:117] "RemoveContainer" containerID="927c05bd830a5acd059718b30db0d3729a43750334da8b1780eaa8eb92316254"
	Mar 15 06:25:05 ha-866665 kubelet[1369]: E0315 06:25:05.572035    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:25:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:25:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:25:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:25:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:25:04.481592   32669 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:261: (dbg) Run:  kubectl --context ha-866665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5b5d89c9d6-6zlhn
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-866665 describe pod busybox-5b5d89c9d6-6zlhn
helpers_test.go:282: (dbg) kubectl --context ha-866665 describe pod busybox-5b5d89c9d6-6zlhn:

                                                
                                                
-- stdout --
	Name:             busybox-5b5d89c9d6-6zlhn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5b5d89c9d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5b5d89c9d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ld746 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-ld746:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (19.91s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (172.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 stop -v=7 --alsologtostderr
E0315 06:26:21.576903   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 stop -v=7 --alsologtostderr: exit status 82 (2m1.734415515s)

                                                
                                                
-- stdout --
	* Stopping node "ha-866665-m04"  ...
	* Stopping node "ha-866665-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:25:07.187928   32799 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:25:07.188033   32799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:25:07.188041   32799 out.go:304] Setting ErrFile to fd 2...
	I0315 06:25:07.188046   32799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:25:07.188259   32799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:25:07.188501   32799 out.go:298] Setting JSON to false
	I0315 06:25:07.188584   32799 mustload.go:65] Loading cluster: ha-866665
	I0315 06:25:07.188949   32799 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:25:07.189036   32799 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:25:07.189207   32799 mustload.go:65] Loading cluster: ha-866665
	I0315 06:25:07.189337   32799 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:25:07.189359   32799 stop.go:39] StopHost: ha-866665-m04
	I0315 06:25:07.189716   32799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:07.189763   32799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:07.204281   32799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0315 06:25:07.204769   32799 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:07.205387   32799 main.go:141] libmachine: Using API Version  1
	I0315 06:25:07.205416   32799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:07.205769   32799 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:07.208263   32799 out.go:177] * Stopping node "ha-866665-m04"  ...
	I0315 06:25:07.209937   32799 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 06:25:07.209980   32799 main.go:141] libmachine: (ha-866665-m04) Calling .DriverName
	I0315 06:25:07.210255   32799 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 06:25:07.210277   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHHostname
	I0315 06:25:07.213654   32799 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:07.214217   32799 main.go:141] libmachine: (ha-866665-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b6:a0", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:24:35 +0000 UTC Type:0 Mac:52:54:00:2e:b6:a0 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-866665-m04 Clientid:01:52:54:00:2e:b6:a0}
	I0315 06:25:07.214245   32799 main.go:141] libmachine: (ha-866665-m04) DBG | domain ha-866665-m04 has defined IP address 192.168.39.184 and MAC address 52:54:00:2e:b6:a0 in network mk-ha-866665
	I0315 06:25:07.214451   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHPort
	I0315 06:25:07.214673   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHKeyPath
	I0315 06:25:07.214843   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetSSHUsername
	I0315 06:25:07.215018   32799 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m04/id_rsa Username:docker}
	I0315 06:25:07.306447   32799 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 06:25:07.360784   32799 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	W0315 06:25:07.413545   32799 stop.go:55] failed to complete vm config backup (will continue): [failed to copy "/etc/kubernetes" to "/var/lib/minikube/backup" (will continue): sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup: Process exited with status 23
	stdout:
	
	stderr:
	rsync: [sender] link_stat "/etc/kubernetes" failed: No such file or directory (2)
	rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1336) [sender=3.2.7]
	]
	I0315 06:25:07.413595   32799 main.go:141] libmachine: Stopping "ha-866665-m04"...
	I0315 06:25:07.413613   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:25:07.415048   32799 main.go:141] libmachine: (ha-866665-m04) Calling .Stop
	I0315 06:25:07.418318   32799 main.go:141] libmachine: (ha-866665-m04) Waiting for machine to stop 0/120
	I0315 06:25:08.420348   32799 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:25:08.421658   32799 main.go:141] libmachine: Machine "ha-866665-m04" was stopped.
	I0315 06:25:08.421677   32799 stop.go:75] duration metric: took 1.21174269s to stop
	I0315 06:25:08.421694   32799 stop.go:39] StopHost: ha-866665-m02
	I0315 06:25:08.421967   32799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:25:08.422000   32799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:25:08.437153   32799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0315 06:25:08.437545   32799 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:25:08.438147   32799 main.go:141] libmachine: Using API Version  1
	I0315 06:25:08.438167   32799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:25:08.438504   32799 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:25:08.440895   32799 out.go:177] * Stopping node "ha-866665-m02"  ...
	I0315 06:25:08.442443   32799 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 06:25:08.442472   32799 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:25:08.442736   32799 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 06:25:08.442759   32799 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:25:08.445703   32799 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:08.446345   32799 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:22:39 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:25:08.446380   32799 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:25:08.446574   32799 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:25:08.446782   32799 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:25:08.446933   32799 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:25:08.447093   32799 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:25:08.545711   32799 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 06:25:08.600894   32799 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 06:25:08.655504   32799 main.go:141] libmachine: Stopping "ha-866665-m02"...
	I0315 06:25:08.655535   32799 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:25:08.657025   32799 main.go:141] libmachine: (ha-866665-m02) Calling .Stop
	I0315 06:25:08.660446   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 0/120
	I0315 06:25:09.662172   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 1/120
	I0315 06:25:10.663948   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 2/120
	I0315 06:25:11.665391   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 3/120
	I0315 06:25:12.666765   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 4/120
	I0315 06:25:13.668570   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 5/120
	I0315 06:25:14.670123   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 6/120
	I0315 06:25:15.671791   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 7/120
	I0315 06:25:16.673246   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 8/120
	I0315 06:25:17.674873   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 9/120
	I0315 06:25:18.676826   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 10/120
	I0315 06:25:19.678993   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 11/120
	I0315 06:25:20.680282   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 12/120
	I0315 06:25:21.681794   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 13/120
	I0315 06:25:22.683100   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 14/120
	I0315 06:25:23.684938   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 15/120
	I0315 06:25:24.686952   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 16/120
	I0315 06:25:25.688270   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 17/120
	I0315 06:25:26.690042   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 18/120
	I0315 06:25:27.692000   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 19/120
	I0315 06:25:28.693931   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 20/120
	I0315 06:25:29.695417   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 21/120
	I0315 06:25:30.697057   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 22/120
	I0315 06:25:31.699033   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 23/120
	I0315 06:25:32.700830   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 24/120
	I0315 06:25:33.702701   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 25/120
	I0315 06:25:34.704437   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 26/120
	I0315 06:25:35.706070   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 27/120
	I0315 06:25:36.707757   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 28/120
	I0315 06:25:37.709743   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 29/120
	I0315 06:25:38.711979   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 30/120
	I0315 06:25:39.714600   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 31/120
	I0315 06:25:40.717111   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 32/120
	I0315 06:25:41.719212   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 33/120
	I0315 06:25:42.720804   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 34/120
	I0315 06:25:43.722671   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 35/120
	I0315 06:25:44.724226   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 36/120
	I0315 06:25:45.725843   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 37/120
	I0315 06:25:46.727900   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 38/120
	I0315 06:25:47.729322   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 39/120
	I0315 06:25:48.730826   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 40/120
	I0315 06:25:49.732351   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 41/120
	I0315 06:25:50.733960   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 42/120
	I0315 06:25:51.735391   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 43/120
	I0315 06:25:52.736869   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 44/120
	I0315 06:25:53.738815   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 45/120
	I0315 06:25:54.740177   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 46/120
	I0315 06:25:55.741370   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 47/120
	I0315 06:25:56.742920   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 48/120
	I0315 06:25:57.744305   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 49/120
	I0315 06:25:58.746371   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 50/120
	I0315 06:25:59.747690   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 51/120
	I0315 06:26:00.749150   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 52/120
	I0315 06:26:01.750512   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 53/120
	I0315 06:26:02.751865   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 54/120
	I0315 06:26:03.753437   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 55/120
	I0315 06:26:04.754846   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 56/120
	I0315 06:26:05.756876   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 57/120
	I0315 06:26:06.758778   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 58/120
	I0315 06:26:07.760147   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 59/120
	I0315 06:26:08.762101   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 60/120
	I0315 06:26:09.763756   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 61/120
	I0315 06:26:10.765260   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 62/120
	I0315 06:26:11.766754   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 63/120
	I0315 06:26:12.769170   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 64/120
	I0315 06:26:13.770951   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 65/120
	I0315 06:26:14.772248   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 66/120
	I0315 06:26:15.773837   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 67/120
	I0315 06:26:16.775509   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 68/120
	I0315 06:26:17.777142   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 69/120
	I0315 06:26:18.779016   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 70/120
	I0315 06:26:19.780374   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 71/120
	I0315 06:26:20.781823   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 72/120
	I0315 06:26:21.783206   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 73/120
	I0315 06:26:22.785007   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 74/120
	I0315 06:26:23.786819   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 75/120
	I0315 06:26:24.788258   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 76/120
	I0315 06:26:25.789939   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 77/120
	I0315 06:26:26.791718   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 78/120
	I0315 06:26:27.793258   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 79/120
	I0315 06:26:28.795224   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 80/120
	I0315 06:26:29.796818   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 81/120
	I0315 06:26:30.798329   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 82/120
	I0315 06:26:31.799834   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 83/120
	I0315 06:26:32.801848   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 84/120
	I0315 06:26:33.803802   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 85/120
	I0315 06:26:34.805295   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 86/120
	I0315 06:26:35.806891   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 87/120
	I0315 06:26:36.808456   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 88/120
	I0315 06:26:37.810223   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 89/120
	I0315 06:26:38.812245   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 90/120
	I0315 06:26:39.813444   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 91/120
	I0315 06:26:40.814894   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 92/120
	I0315 06:26:41.816228   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 93/120
	I0315 06:26:42.817871   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 94/120
	I0315 06:26:43.819574   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 95/120
	I0315 06:26:44.821200   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 96/120
	I0315 06:26:45.822930   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 97/120
	I0315 06:26:46.824135   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 98/120
	I0315 06:26:47.825762   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 99/120
	I0315 06:26:48.827854   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 100/120
	I0315 06:26:49.829199   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 101/120
	I0315 06:26:50.830687   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 102/120
	I0315 06:26:51.832051   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 103/120
	I0315 06:26:52.833558   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 104/120
	I0315 06:26:53.835234   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 105/120
	I0315 06:26:54.836561   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 106/120
	I0315 06:26:55.838196   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 107/120
	I0315 06:26:56.839533   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 108/120
	I0315 06:26:57.841082   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 109/120
	I0315 06:26:58.843121   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 110/120
	I0315 06:26:59.844599   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 111/120
	I0315 06:27:00.846083   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 112/120
	I0315 06:27:01.847429   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 113/120
	I0315 06:27:02.849030   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 114/120
	I0315 06:27:03.850910   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 115/120
	I0315 06:27:04.852572   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 116/120
	I0315 06:27:05.854364   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 117/120
	I0315 06:27:06.855888   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 118/120
	I0315 06:27:07.857396   32799 main.go:141] libmachine: (ha-866665-m02) Waiting for machine to stop 119/120
	I0315 06:27:08.857990   32799 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 06:27:08.858048   32799 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 06:27:08.860074   32799 out.go:177] 
	W0315 06:27:08.861535   32799 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 06:27:08.861556   32799 out.go:239] * 
	* 
	W0315 06:27:08.863727   32799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 06:27:08.865207   32799 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-866665 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr: exit status 7 (33.780660796s)

                                                
                                                
-- stdout --
	ha-866665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-866665-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-866665-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:27:08.923155   33183 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:27:08.923266   33183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:27:08.923275   33183 out.go:304] Setting ErrFile to fd 2...
	I0315 06:27:08.923280   33183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:27:08.923474   33183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:27:08.923689   33183 out.go:298] Setting JSON to false
	I0315 06:27:08.923714   33183 mustload.go:65] Loading cluster: ha-866665
	I0315 06:27:08.923829   33183 notify.go:220] Checking for updates...
	I0315 06:27:08.924154   33183 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:27:08.924171   33183 status.go:255] checking status of ha-866665 ...
	I0315 06:27:08.924808   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:08.924854   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:08.945678   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0315 06:27:08.946073   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:08.946738   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:08.946762   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:08.947080   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:08.947284   33183 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:27:08.948956   33183 status.go:330] ha-866665 host status = "Running" (err=<nil>)
	I0315 06:27:08.948972   33183 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:27:08.949238   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:08.949278   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:08.963566   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0315 06:27:08.964037   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:08.964441   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:08.964520   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:08.964851   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:08.965043   33183 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:27:08.968124   33183 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:27:08.968732   33183 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:27:08.968765   33183 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:27:08.968915   33183 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:27:08.969265   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:08.969308   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:08.983562   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0315 06:27:08.984043   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:08.984568   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:08.984586   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:08.984934   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:08.985120   33183 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:27:08.985320   33183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:27:08.985351   33183 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:27:08.987959   33183 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:27:08.988336   33183 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:27:08.988355   33183 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:27:08.988479   33183 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:27:08.988668   33183 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:27:08.988810   33183 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:27:08.988947   33183 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:27:09.070430   33183 ssh_runner.go:195] Run: systemctl --version
	I0315 06:27:09.077349   33183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:27:09.094615   33183 kubeconfig.go:125] found "ha-866665" server: "https://192.168.39.254:8443"
	I0315 06:27:09.094646   33183 api_server.go:166] Checking apiserver status ...
	I0315 06:27:09.094698   33183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:27:09.110445   33183 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6126/cgroup
	W0315 06:27:09.123201   33183 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:27:09.123257   33183 ssh_runner.go:195] Run: ls
	I0315 06:27:09.128135   33183 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:27:11.857124   33183 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:27:11.857188   33183 retry.go:31] will retry after 273.508109ms: state is "Stopped"
	I0315 06:27:12.131742   33183 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:27:14.929172   33183 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:27:14.929222   33183 retry.go:31] will retry after 375.268245ms: state is "Stopped"
	I0315 06:27:15.304759   33183 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:27:18.001316   33183 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:27:18.001386   33183 retry.go:31] will retry after 303.522201ms: state is "Stopped"
	I0315 06:27:18.305939   33183 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:27:21.060788   33183 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:27:21.060832   33183 retry.go:31] will retry after 465.640593ms: state is "Stopped"
	I0315 06:27:21.527569   33183 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 06:27:24.132812   33183 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:27:24.132858   33183 status.go:422] ha-866665 apiserver status = Running (err=<nil>)
	I0315 06:27:24.132866   33183 status.go:257] ha-866665 status: &{Name:ha-866665 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:27:24.132910   33183 status.go:255] checking status of ha-866665-m02 ...
	I0315 06:27:24.133306   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:24.133354   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:24.149332   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0315 06:27:24.149779   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:24.150284   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:24.150298   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:24.150604   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:24.150822   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:27:24.152642   33183 status.go:330] ha-866665-m02 host status = "Running" (err=<nil>)
	I0315 06:27:24.152656   33183 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:27:24.152923   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:24.152956   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:24.167348   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34961
	I0315 06:27:24.167829   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:24.168272   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:24.168296   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:24.168632   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:24.168798   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetIP
	I0315 06:27:24.171542   33183 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:27:24.171975   33183 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:22:39 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:27:24.171999   33183 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:27:24.172124   33183 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:27:24.172403   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:24.172435   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:24.186954   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0315 06:27:24.187270   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:24.187692   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:24.187712   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:24.188000   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:24.188202   33183 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:27:24.188366   33183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:27:24.188387   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:27:24.190885   33183 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:27:24.191207   33183 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:22:39 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:27:24.191230   33183 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:27:24.191355   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:27:24.191516   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:27:24.191673   33183 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:27:24.191796   33183 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	W0315 06:27:42.628736   33183 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0315 06:27:42.628848   33183 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0315 06:27:42.628868   33183 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:27:42.628878   33183 status.go:257] ha-866665-m02 status: &{Name:ha-866665-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 06:27:42.628907   33183 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0315 06:27:42.628917   33183 status.go:255] checking status of ha-866665-m04 ...
	I0315 06:27:42.629233   33183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:27:42.629286   33183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:27:42.643888   33183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0315 06:27:42.644373   33183 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:27:42.644839   33183 main.go:141] libmachine: Using API Version  1
	I0315 06:27:42.644866   33183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:27:42.645184   33183 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:27:42.645371   33183 main.go:141] libmachine: (ha-866665-m04) Calling .GetState
	I0315 06:27:42.646902   33183 status.go:330] ha-866665-m04 host status = "Stopped" (err=<nil>)
	I0315 06:27:42.646915   33183 status.go:343] host is not running, skipping remaining checks
	I0315 06:27:42.646923   33183 status.go:257] ha-866665-m04 status: &{Name:ha-866665-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
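The status probe above failed in two independent ways: the apiserver HA VIP (https://192.168.39.254:8443/healthz) never answered within the retry window, and SSH to ha-866665-m02 (192.168.39.27:22) returned "no route to host". A hedged manual reproduction of those two checks, using only endpoints already shown in the log, would be:

	curl -k --max-time 5 https://192.168.39.254:8443/healthz   # -k because the apiserver serving cert is not in the local trust store
	nc -vz -w 5 192.168.39.27 22                               # plain TCP reachability check of the m02 SSH port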
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr": ha-866665
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-866665-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-866665-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr": ha-866665
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-866665-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-866665-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr": ha-866665
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-866665-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-866665-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665: exit status 2 (15.596951639s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
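The assertions at ha_test.go:546/549/552 expect every host, kubelet, and apiserver to be reported as Stopped, which the captured status output contradicts, and the post-mortem check above only extracts the Host field via a Go template. A rough way to summarize the full per-node state by hand is sketched below; the additional template fields (.Name, .Kubelet, .APIServer) are inferred from the Status struct printed in the log rather than from documented flags, so treat them as assumptions:

	out/minikube-linux-amd64 -p ha-866665 status --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
	out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr 2>/dev/null | grep -c 'kubelet: Stopped'   # count nodes whose kubelet is reported Stopped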
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.48861449s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	| node    | ha-866665 node delete m03 -v=7                                                   | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC | 15 Mar 24 06:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-866665 stop -v=7                                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:20:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:20:48.233395   31266 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:20:48.233693   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233703   31266 out.go:304] Setting ErrFile to fd 2...
	I0315 06:20:48.233707   31266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:20:48.233974   31266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:20:48.234536   31266 out.go:298] Setting JSON to false
	I0315 06:20:48.235411   31266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3745,"bootTime":1710479904,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:20:48.235482   31266 start.go:139] virtualization: kvm guest
	I0315 06:20:48.238560   31266 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:20:48.240218   31266 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:20:48.240225   31266 notify.go:220] Checking for updates...
	I0315 06:20:48.241922   31266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:20:48.243333   31266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:20:48.244647   31266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:20:48.245904   31266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:20:48.247270   31266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:20:48.249101   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:48.249189   31266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:20:48.249650   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.249692   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.264611   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0315 06:20:48.265138   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.265713   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.265743   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.266115   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.266310   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.301775   31266 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:20:48.303090   31266 start.go:297] selected driver: kvm2
	I0315 06:20:48.303107   31266 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.303243   31266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:20:48.303557   31266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.303624   31266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:20:48.318165   31266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:20:48.318839   31266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:20:48.318901   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:20:48.318914   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:20:48.318976   31266 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:20:48.319098   31266 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:20:48.320866   31266 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:20:48.322118   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:20:48.322150   31266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:20:48.322163   31266 cache.go:56] Caching tarball of preloaded images
	I0315 06:20:48.322262   31266 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:20:48.322275   31266 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:20:48.322412   31266 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:20:48.322603   31266 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:20:48.322644   31266 start.go:364] duration metric: took 25.657µs to acquireMachinesLock for "ha-866665"
	I0315 06:20:48.322657   31266 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:20:48.322667   31266 fix.go:54] fixHost starting: 
	I0315 06:20:48.322903   31266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:20:48.322934   31266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:20:48.337122   31266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0315 06:20:48.337522   31266 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:20:48.337966   31266 main.go:141] libmachine: Using API Version  1
	I0315 06:20:48.337984   31266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:20:48.338306   31266 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:20:48.338487   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.338668   31266 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:20:48.340290   31266 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:20:48.340310   31266 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:20:48.342346   31266 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:20:48.343623   31266 machine.go:94] provisionDockerMachine start ...
	I0315 06:20:48.343641   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:20:48.343821   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.346289   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346782   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.346824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.346966   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.347119   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347285   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.347418   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.347544   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.347724   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.347735   31266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:20:48.450351   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.450383   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450661   31266 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:20:48.450684   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.450849   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.453380   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453790   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.453818   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.453891   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.454090   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454251   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.454383   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.454547   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.454720   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.454732   31266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:20:48.576850   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:20:48.576878   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.579606   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.579972   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.580005   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.580121   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:48.580306   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580483   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:48.580636   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:48.580815   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:48.581041   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:48.581065   31266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:20:48.682862   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:20:48.682887   31266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:20:48.682906   31266 buildroot.go:174] setting up certificates
	I0315 06:20:48.682935   31266 provision.go:84] configureAuth start
	I0315 06:20:48.682950   31266 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:20:48.683239   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:20:48.686023   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686417   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.686450   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.686552   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:48.688525   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.688908   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:48.688934   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:48.689080   31266 provision.go:143] copyHostCerts
	I0315 06:20:48.689110   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689138   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:20:48.689146   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:20:48.689206   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:20:48.689286   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689314   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:20:48.689321   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:20:48.689345   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:20:48.689388   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689404   31266 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:20:48.689410   31266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:20:48.689430   31266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:20:48.689471   31266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:20:49.237189   31266 provision.go:177] copyRemoteCerts
	I0315 06:20:49.237247   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:20:49.237269   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.239856   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240163   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.240195   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.240300   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.240501   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.240683   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.240845   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:20:49.320109   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:20:49.320179   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:20:49.347303   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:20:49.347368   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:20:49.373709   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:20:49.373780   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:20:49.400806   31266 provision.go:87] duration metric: took 717.857802ms to configureAuth
	I0315 06:20:49.400834   31266 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:20:49.401098   31266 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:20:49.401246   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:20:49.404071   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404492   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:20:49.404524   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:20:49.404710   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:20:49.404892   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405052   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:20:49.405236   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:20:49.405428   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:20:49.405641   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:20:49.405663   31266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:22:20.418848   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:22:20.418872   31266 machine.go:97] duration metric: took 1m32.075236038s to provisionDockerMachine
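Note on the command above: the %!s(MISSING) is Go's fmt package reporting a missing argument while the command string was being logged; the command actually sent over SSH is presumably the same text with a literal %s, i.e. roughly:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

which leaves /etc/sysconfig/crio.minikube containing the single CRIO_MINIKUBE_OPTIONS line echoed back above. The timestamps show this step ran from 06:20:49 to 06:22:20, so the crio restart accounts for roughly 91 of the 92 seconds reported for provisionDockerMachine.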
	I0315 06:22:20.418884   31266 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:22:20.418893   31266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:22:20.418908   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.419251   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:22:20.419276   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.422223   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422630   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.422653   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.422780   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.422931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.423065   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.423242   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.505795   31266 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:22:20.510297   31266 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:22:20.510324   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:22:20.510382   31266 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:22:20.510451   31266 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:22:20.510461   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:22:20.510550   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:22:20.521122   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:20.547933   31266 start.go:296] duration metric: took 129.036646ms for postStartSetup
	I0315 06:22:20.547978   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.548256   31266 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:22:20.548281   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.550824   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551345   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.551367   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.551588   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.551778   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.551927   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.552071   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:22:20.631948   31266 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:22:20.631985   31266 fix.go:56] duration metric: took 1m32.309321607s for fixHost
	I0315 06:22:20.632007   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.635221   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635666   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.635698   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.635839   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.636059   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636205   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.636327   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.636488   31266 main.go:141] libmachine: Using SSH client type: native
	I0315 06:22:20.636663   31266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:22:20.636675   31266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:22:20.737851   31266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710483740.705795243
	
	I0315 06:22:20.737873   31266 fix.go:216] guest clock: 1710483740.705795243
	I0315 06:22:20.737880   31266 fix.go:229] Guest: 2024-03-15 06:22:20.705795243 +0000 UTC Remote: 2024-03-15 06:22:20.631992794 +0000 UTC m=+92.446679747 (delta=73.802449ms)
	I0315 06:22:20.737903   31266 fix.go:200] guest clock delta is within tolerance: 73.802449ms
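The date +%!s(MISSING).%!N(MISSING) above is the same logging artifact; the command run on the guest is presumably date +%s.%N, which prints the epoch time with nanoseconds (1710483740.705795243 here). minikube compares that value against the host clock and continues because the ~74 ms delta is within the allowed skew tolerance.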
	I0315 06:22:20.737909   31266 start.go:83] releasing machines lock for "ha-866665", held for 1m32.415256417s
	I0315 06:22:20.737929   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.738195   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:20.741307   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.741994   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.742025   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.742221   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.742829   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743040   31266 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:22:20.743136   31266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:22:20.743200   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.743336   31266 ssh_runner.go:195] Run: cat /version.json
	I0315 06:22:20.743366   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:22:20.746043   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746264   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746484   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746514   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746631   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.746767   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:20.746784   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:20.746801   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.746931   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:22:20.747000   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747060   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:22:20.747123   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.747171   31266 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:22:20.747308   31266 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:22:20.822170   31266 ssh_runner.go:195] Run: systemctl --version
	I0315 06:22:20.864338   31266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:22:21.034553   31266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:22:21.041415   31266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:22:21.041490   31266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:22:21.051566   31266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:22:21.051586   31266 start.go:494] detecting cgroup driver to use...
	I0315 06:22:21.051648   31266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:22:21.068910   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:22:21.083923   31266 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:22:21.083988   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:22:21.099367   31266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:22:21.114470   31266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:22:21.261920   31266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:22:21.413984   31266 docker.go:233] disabling docker service ...
	I0315 06:22:21.414050   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:22:21.432166   31266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:22:21.446453   31266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:22:21.603068   31266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:22:21.758747   31266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:22:21.773638   31266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:22:21.795973   31266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:22:21.796067   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.809281   31266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:22:21.809373   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.820969   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:22:21.832684   31266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
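Assuming the sed expressions above found their target keys, the runtime configuration at this point looks roughly as follows. /etc/crictl.yaml (written two steps earlier, with %!s(MISSING) again standing in for a literal %s):

    runtime-endpoint: unix:///var/run/crio/crio.sock

and /etc/crio/crio.conf.d/02-crio.conf should now contain at least:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"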
	I0315 06:22:21.843891   31266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:22:21.855419   31266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:22:21.867162   31266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:22:21.877235   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:22.024876   31266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:22:27.210727   31266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.185810009s)
	I0315 06:22:27.210754   31266 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:22:27.210796   31266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:22:27.215990   31266 start.go:562] Will wait 60s for crictl version
	I0315 06:22:27.216039   31266 ssh_runner.go:195] Run: which crictl
	I0315 06:22:27.219900   31266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:22:27.261162   31266 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:22:27.261285   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.294548   31266 ssh_runner.go:195] Run: crio --version
	I0315 06:22:27.328151   31266 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:22:27.329667   31266 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:22:27.332373   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.332800   31266 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:22:27.332816   31266 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:22:27.333023   31266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:22:27.338097   31266 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:22:27.338218   31266 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:22:27.338265   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.384063   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.384086   31266 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:22:27.384141   31266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:22:27.423578   31266 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:22:27.423601   31266 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:22:27.423609   31266 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:22:27.423697   31266 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:22:27.423756   31266 ssh_runner.go:195] Run: crio config
	I0315 06:22:27.482626   31266 cni.go:84] Creating CNI manager for ""
	I0315 06:22:27.482649   31266 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 06:22:27.482662   31266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:22:27.482691   31266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:22:27.482834   31266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
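In the KubeletConfiguration above, the three "0%!"(MISSING) values are the logger tripping over literal percent signs; the intended eviction thresholds are presumably:

    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"

which, together with imageGCHighThresholdPercent: 100, effectively disables disk-pressure eviction, matching the "disable disk resource management by default" comment.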
	
	I0315 06:22:27.482850   31266 kube-vip.go:111] generating kube-vip config ...
	I0315 06:22:27.482886   31266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:22:27.497074   31266 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:22:27.497204   31266 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
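This is the static pod manifest that gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few steps below (1346 bytes). The address 192.168.39.254 is the HA virtual IP configured as APIServerHAVIP for this profile, and lb_enable/lb_port 8443 correspond to the control-plane load balancing that kube-vip.go reported auto-enabling above.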
	I0315 06:22:27.497284   31266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:22:27.509195   31266 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:22:27.509286   31266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:22:27.520191   31266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:22:27.538135   31266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:22:27.555610   31266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:22:27.573955   31266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:22:27.593596   31266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:22:27.598156   31266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:22:27.747192   31266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:22:27.764301   31266 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:22:27.764333   31266 certs.go:194] generating shared ca certs ...
	I0315 06:22:27.764355   31266 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.764534   31266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:22:27.764615   31266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:22:27.764630   31266 certs.go:256] generating profile certs ...
	I0315 06:22:27.764730   31266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:22:27.764765   31266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68
	I0315 06:22:27.764786   31266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.89 192.168.39.254]
	I0315 06:22:27.902249   31266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 ...
	I0315 06:22:27.902281   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68: {Name:mk4ec3568f719ba46ca54f4c420840c2b2fdca4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902456   31266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 ...
	I0315 06:22:27.902473   31266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68: {Name:mka2b45e463d67423a36473df143eb634ee13f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:22:27.902571   31266 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:22:27.902733   31266 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.421fcf68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:22:27.902906   31266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:22:27.902923   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:22:27.902942   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:22:27.902957   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:22:27.902977   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:22:27.903001   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:22:27.903021   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:22:27.903035   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:22:27.903050   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:22:27.903117   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:22:27.903157   31266 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:22:27.903170   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:22:27.903219   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:22:27.903252   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:22:27.903289   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:22:27.903350   31266 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:22:27.903416   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:22:27.903454   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:22:27.903473   31266 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:27.904019   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:22:27.931140   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:22:27.956928   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:22:27.981629   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:22:28.007100   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 06:22:28.032763   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:22:28.057851   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:22:28.086521   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:22:28.112212   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:22:28.139218   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:22:28.164931   31266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:22:28.191225   31266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:22:28.209099   31266 ssh_runner.go:195] Run: openssl version
	I0315 06:22:28.215089   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:22:28.226199   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230951   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.230998   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:22:28.237257   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:22:28.247307   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:22:28.258550   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263269   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.263323   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:22:28.269418   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:22:28.283320   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:22:28.347367   31266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358725   31266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.358796   31266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:22:28.386093   31266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:22:28.404000   31266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:22:28.431851   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:22:28.439647   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:22:28.452233   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:22:28.464741   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:22:28.479804   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:22:28.488488   31266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
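Each of the openssl invocations above uses -checkend 86400, i.e. it asks whether the certificate expires within the next 86400 seconds (24 hours); a zero exit status lets the restart reuse the existing certificates instead of regenerating them. A minimal Go sketch of the same check, assuming a PEM-encoded certificate file (the path below is just one of the certs listed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative path; any PEM-encoded certificate works.
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: fail if the
    	// certificate's NotAfter falls within the next 24 hours.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }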
	I0315 06:22:28.494840   31266 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:22:28.495020   31266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:22:28.495101   31266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:22:28.554274   31266 cri.go:89] found id: "c6fb756ec96d63a35d3d309d8f9f0e4b3ba437bc3e2ab9b64aeedaefae913df8"
	I0315 06:22:28.554299   31266 cri.go:89] found id: "dcdaf40ca56142d0131435198e249e6b4f6618b31356b7d2753d5ef5312de8d5"
	I0315 06:22:28.554305   31266 cri.go:89] found id: "c0c01dd7f22bdefe39bea117843425c533e95d702153aba98239f622ad3c5ff2"
	I0315 06:22:28.554310   31266 cri.go:89] found id: "9b4a5b482d487e39ba565da240819c12b69d88ec3854e05cc308a1d7226aaa46"
	I0315 06:22:28.554314   31266 cri.go:89] found id: "21104767a93711e1cc7d9b753ddee37153899343373cd94a77acea5c45066aa0"
	I0315 06:22:28.554317   31266 cri.go:89] found id: "652c2ee94f6f3c21120c9d09dac563cb0461e36418fec1d03c7ae44d2f1d5855"
	I0315 06:22:28.554322   31266 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:22:28.554325   31266 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:22:28.554329   31266 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:22:28.554336   31266 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:22:28.554340   31266 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:22:28.554343   31266 cri.go:89] found id: "b639b306bcc41cf83b348e17cd0d1f53a5b85cb7d8fee1cc5cfc225e971d5551"
	I0315 06:22:28.554348   31266 cri.go:89] found id: "dddbd40f934ba2a9f899464752e32115d55f26cd3ea23ab88e391067fce42323"
	I0315 06:22:28.554351   31266 cri.go:89] found id: ""
	I0315 06:22:28.554403   31266 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.601076016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484078601053203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2757b4e4-4460-40ee-b563-8712aeb1ff5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.601807315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de44ebc6-38d2-4979-8775-b90808fd0cc6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.601863595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de44ebc6-38d2-4979-8775-b90808fd0cc6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.602402285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710484072562161342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956838437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483971559010457,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef22
1780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSan
dboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":915
3,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd
5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214a
ad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b6
3881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_
EXITED,CreatedAt:1710483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de44ebc6-38d2-4979-8775-b90808fd0cc6 name=/runtime.v1.RuntimeService/ListContainers
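The repeated /runtime.v1.RuntimeService/ListContainers entries above are CRI-O answering standard CRI gRPC calls with an empty filter. Below is a minimal Go sketch (not part of the captured log) of an equivalent client query, assuming CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings; it is illustrative only.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O over its unix socket (default path assumed; adjust if needed).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty ContainerFilter corresponds to the "No filters were applied,
	// returning full container list" lines in the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-24s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}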
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.646534055Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a56625ef-af82-40ae-8f1f-3852c7ff169d name=/runtime.v1.RuntimeService/Version
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.646609527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a56625ef-af82-40ae-8f1f-3852c7ff169d name=/runtime.v1.RuntimeService/Version
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.647626703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac823748-e7e6-4407-996e-9422adf7d083 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.648063041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484078648039479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac823748-e7e6-4407-996e-9422adf7d083 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.648632958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d1571ac-e4a3-4116-9791-34fb6077c1a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.648693269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d1571ac-e4a3-4116-9791-34fb6077c1a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.649076364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710484072562161342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956838437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483971559010457,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef22
1780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSan
dboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":915
3,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd
5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214a
ad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b6
3881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_
EXITED,CreatedAt:1710483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d1571ac-e4a3-4116-9791-34fb6077c1a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.742466405Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ad083f0-bd50-4b3b-977c-cf2ed007288a name=/runtime.v1.RuntimeService/Version
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.742538930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ad083f0-bd50-4b3b-977c-cf2ed007288a name=/runtime.v1.RuntimeService/Version
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.743604452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04019c39-48d4-4dcb-a87b-2bcd739f9387 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.744090060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484078744064321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04019c39-48d4-4dcb-a87b-2bcd739f9387 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.744883205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b18765ec-f6a9-46a6-af1a-6218ea936f35 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.744965158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b18765ec-f6a9-46a6-af1a-6218ea936f35 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:27:58 ha-866665 crio[3914]: time="2024-03-15 06:27:58.745489140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710484072562161342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956838437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710483971559010457,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb,PodSandboxId:95c517450cdc3e0d30c7dceb1398e8af4b54bb7ffd9532cb14e4af5992c587e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710483885586634022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe,PodSandboxId:c7f6cbdff0a6d1d8d88689fa64d2eba0992fb6361039a313f626d8a404e29e91,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710483824558998890,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710483797574121372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710483786887271368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710483753567181753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950153b4c9efe4316e3c3891bb3ef22
1780f0fe05967f5dd112e0b11f5c73088,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483753567471562,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSan
dboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483753601153880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710483753494970387,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c,PodSandboxId:70721076f18d9cddd42a8dea0197ac107180eec342e87d7a593bd1356fd21ff7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710483753529055200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710483753556447381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710483748540106742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":915
3,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3893d7b08f562cae4177f840f3362d17b6cb882ab7084901b9df0f1a181c0326,PodSandboxId:4b1a833979698f5b140afd51b3f12daf851fcc040d463e81fda7d4f72fe604c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483253186987775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7b2cc69f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90,PodSandboxId:72c22c098aee5b777c63895a9a7a112a0062d512c5293f16be73fc3bff128525,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083739905873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780,PodSandboxId:89474c22140606aaae1f846d0d5a614c6bf457b341fb31147f33ecfe50ac823f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483083764358823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd
5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0,PodSandboxId:e15b87fb1896f146c4846255157fab51aaf89ec0a37160e9d2cf20e54b46d709,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214a
ad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483078219845736,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3,PodSandboxId:97bf2aa8738ce339fe47f4c1c49f7c4a18b7f68c8ffc1d0b4afb30de19114964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b6
3881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483058653682516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435,PodSandboxId:682c38a8f4263d0ce5591306eff5d20c580ae1d8c71c7aa9293f00021e437cae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_
EXITED,CreatedAt:1710483058632020970,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b18765ec-f6a9-46a6-af1a-6218ea936f35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e09471036e57d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 seconds ago        Running             kindnet-cni               4                   c7f6cbdff0a6d       kindnet-9nvvx
	e4370ef8479c8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Exited              kube-apiserver            4                   f7b655acbd708       kube-apiserver-ha-866665
	cb4635f3b41c2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      About a minute ago   Running             kube-vip                  4                   b3fef0e73d7bb       kube-vip-ha-866665
	5eaa4a539d19c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       5                   95c517450cdc3       storage-provisioner
	a632d3a2baa85       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago        Exited              kindnet-cni               3                   c7f6cbdff0a6d       kindnet-9nvvx
	e490c56eb4c5d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago        Running             kube-controller-manager   2                   70721076f18d9       kube-controller-manager-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago        Running             busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago        Running             coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	950153b4c9efe       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  3                   b3fef0e73d7bb       kube-vip-ha-866665
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago        Running             kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago        Running             etcd                      1                   79337bac30908       etcd-ha-866665
	002360447d19f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago        Exited              kube-controller-manager   1                   70721076f18d9       kube-controller-manager-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago        Running             kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago        Running             coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
	3893d7b08f562       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago       Exited              busybox                   0                   4b1a833979698       busybox-5b5d89c9d6-82knb
	bede6c7f8912b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago       Exited              coredns                   0                   89474c2214060       coredns-5dd5756b68-r57px
	c0ecd2e858892       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago       Exited              coredns                   0                   72c22c098aee5       coredns-5dd5756b68-mgthb
	c07640cff4ced       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago       Exited              kube-proxy                0                   e15b87fb1896f       kube-proxy-sbxgg
	7fcd79ed43f7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      17 minutes ago       Exited              kube-scheduler            0                   97bf2aa8738ce       kube-scheduler-ha-866665
	adc8145247000       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      17 minutes ago       Exited              etcd                      0                   682c38a8f4263       etcd-ha-866665
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[800224354]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:09.663) (total time: 12252ms):
	Trace[800224354]: ---"Objects listed" error:Unauthorized 12252ms (06:27:21.915)
	Trace[800224354]: [12.252932189s] [12.252932189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[532336764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:10.595) (total time: 11322ms):
	Trace[532336764]: ---"Objects listed" error:Unauthorized 11321ms (06:27:21.916)
	Trace[532336764]: [11.322096854s] [11.322096854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1149679676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:25.734) (total time: 10173ms):
	Trace[1149679676]: ---"Objects listed" error:Unauthorized 10171ms (06:27:35.906)
	Trace[1149679676]: [10.173827374s] [10.173827374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	
	
	==> coredns [bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780] <==
	[INFO] 10.244.0.4:38164 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009591847s
	[INFO] 10.244.1.2:58652 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000766589s
	[INFO] 10.244.1.2:51069 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001862794s
	[INFO] 10.244.0.4:39512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00055199s
	[INFO] 10.244.0.4:46188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133209s
	[INFO] 10.244.0.4:45008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008468s
	[INFO] 10.244.0.4:37076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097079s
	[INFO] 10.244.1.2:45388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815413s
	[INFO] 10.244.1.2:40983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165928s
	[INFO] 10.244.1.2:41822 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199064s
	[INFO] 10.244.1.2:51003 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093469s
	[INFO] 10.244.2.2:52723 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155039s
	[INFO] 10.244.2.2:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105876s
	[INFO] 10.244.2.2:40110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118647s
	[INFO] 10.244.1.2:48735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190723s
	[INFO] 10.244.1.2:59420 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115761s
	[INFO] 10.244.1.2:44465 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090898s
	[INFO] 10.244.2.2:55054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145748s
	[INFO] 10.244.2.2:48352 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081059s
	[INFO] 10.244.0.4:53797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115756s
	[INFO] 10.244.0.4:52841 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114315s
	[INFO] 10.244.1.2:34071 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158733s
	[INFO] 10.244.2.2:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239839s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90] <==
	[INFO] 10.244.2.2:48404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148272s
	[INFO] 10.244.2.2:45614 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171944s
	[INFO] 10.244.2.2:42730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012501s
	[INFO] 10.244.2.2:38361 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605049s
	[INFO] 10.244.2.2:54334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:51787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138576s
	[INFO] 10.244.0.4:35351 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081934s
	[INFO] 10.244.0.4:56185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140731s
	[INFO] 10.244.0.4:49966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062146s
	[INFO] 10.244.1.2:35089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123543s
	[INFO] 10.244.2.2:59029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184488s
	[INFO] 10.244.2.2:57369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103045s
	[INFO] 10.244.0.4:37219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243853s
	[INFO] 10.244.0.4:39054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129011s
	[INFO] 10.244.1.2:38863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000321539s
	[INFO] 10.244.1.2:42772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125764s
	[INFO] 10.244.1.2:50426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114767s
	[INFO] 10.244.2.2:48400 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140476s
	[INFO] 10.244.2.2:47852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177728s
	[INFO] 10.244.2.2:44657 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185799s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	Trace[892228593]: [12.602258056s] [12.602258056s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[468223666]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.425) (total time: 11492ms):
	Trace[468223666]: ---"Objects listed" error:Unauthorized 11491ms (06:27:35.917)
	Trace[468223666]: [11.49243538s] [11.49243538s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[652911396]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.712) (total time: 11205ms):
	Trace[652911396]: ---"Objects listed" error:Unauthorized 11205ms (06:27:35.918)
	Trace[652911396]: [11.205453768s] [11.205453768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[659281961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.824) (total time: 11093ms):
	Trace[659281961]: ---"Objects listed" error:Unauthorized 11093ms (06:27:35.918)
	Trace[659281961]: [11.093964434s] [11.093964434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.056814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054962] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.193593] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.117038] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.245141] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.806127] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.059748] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.159068] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.996795] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:11] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"info","ts":"2024-03-15T06:27:53.836139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:53.836212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:53.836278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:53.836296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"warn","ts":"2024-03-15T06:27:53.898854Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T06:27:54.399787Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T06:27:54.520006Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"af74041eca695613","rtt":"9.002373ms","error":"dial tcp 192.168.39.27:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-15T06:27:54.520124Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"af74041eca695613","rtt":"1.272856ms","error":"dial tcp 192.168.39.27:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-15T06:27:54.900643Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T06:27:55.401303Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-03-15T06:27:55.535971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:55.536093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:55.536149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:55.536199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"warn","ts":"2024-03-15T06:27:55.902508Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T06:27:56.403507Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4513607660419770026,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T06:27:56.893828Z","caller":"etcdserver/v3_server.go:909","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:27:57.235932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:57.235995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:57.236011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:57.236026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.935653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.935693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.935705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.93572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	
	
	==> etcd [adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435] <==
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 06:20:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-15T06:20:49.61568Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:20:49.61574Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:20:49.615908Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:20:49.616111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616182Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616308Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616452Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616527Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616603Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.616661Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:20:49.61669Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616739Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.616805Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617009Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617111Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.617201Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bd5db29ca66a387"}
	{"level":"info","ts":"2024-03-15T06:20:49.620924Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621045Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:20:49.621081Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 06:27:59 up 17 min,  0 users,  load average: 0.21, 0.56, 0.40
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe] <==
	I0315 06:24:45.722064       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:24:55.736721       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:24:55.736767       1 main.go:227] handling current node
	I0315 06:24:55.736779       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:24:55.736785       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:24:55.736905       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0315 06:24:55.736931       1 main.go:250] Node ha-866665-m03 has CIDR [10.244.2.0/24] 
	I0315 06:24:55.736991       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:24:55.737018       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:25:05.752125       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0315 06:25:05.752883       1 main.go:227] handling current node
	I0315 06:25:05.752904       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0315 06:25:05.752913       1 main.go:250] Node ha-866665-m02 has CIDR [10.244.1.0/24] 
	I0315 06:25:05.753452       1 main.go:223] Handling node with IPs: map[192.168.39.184:{}]
	I0315 06:25:05.753523       1 main.go:250] Node ha-866665-m04 has CIDR [10.244.3.0/24] 
	I0315 06:25:22.879843       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0315 06:25:36.892794       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0315 06:25:50.890933       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0315 06:26:04.890380       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0315 06:26:18.888560       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	panic: Reached maximum retries obtaining node list: etcdserver: request timed out
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a] <==
	I0315 06:27:52.928653       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0315 06:27:52.928736       1 main.go:107] hostIP = 192.168.39.78
	podIP = 192.168.39.78
	I0315 06:27:52.928959       1 main.go:116] setting mtu 1500 for CNI 
	I0315 06:27:52.929005       1 main.go:146] kindnetd IP family: "ipv4"
	I0315 06:27:52.929029       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:27:54.447816       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:27:54.449702       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:27:57.519774       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596] <==
	W0315 06:27:50.003349       1 reflector.go:535] storage/cacher.go:/volumeattachments: failed to list *storage.VolumeAttachment: etcdserver: request timed out
	I0315 06:27:50.003365       1 trace.go:236] Trace[731904531]: "Reflector ListAndWatch" name:storage/cacher.go:/volumeattachments (15-Mar-2024 06:27:36.912) (total time: 13090ms):
	Trace[731904531]: ---"Objects listed" error:etcdserver: request timed out 13090ms (06:27:50.003)
	Trace[731904531]: [13.090756124s] [13.090756124s] END
	E0315 06:27:50.003369       1 cacher.go:470] cacher (volumeattachments.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.VolumeAttachment: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003380       1 reflector.go:535] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	I0315 06:27:50.003393       1 trace.go:236] Trace[592093521]: "Reflector ListAndWatch" name:storage/cacher.go:/prioritylevelconfigurations (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[592093521]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[592093521]: [13.09928422s] [13.09928422s] END
	E0315 06:27:50.003397       1 cacher.go:470] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003439       1 reflector.go:535] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	I0315 06:27:50.003456       1 trace.go:236] Trace[746771974]: "Reflector ListAndWatch" name:storage/cacher.go:/poddisruptionbudgets (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[746771974]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[746771974]: [13.099292505s] [13.099292505s] END
	E0315 06:27:50.003482       1 cacher.go:470] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003501       1 reflector.go:535] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out
	I0315 06:27:50.003534       1 trace.go:236] Trace[1529918640]: "Reflector ListAndWatch" name:storage/cacher.go:/flowschemas (15-Mar-2024 06:27:36.900) (total time: 13103ms):
	Trace[1529918640]: ---"Objects listed" error:etcdserver: request timed out 13103ms (06:27:50.003)
	Trace[1529918640]: [13.10350995s] [13.10350995s] END
	E0315 06:27:50.003539       1 cacher.go:470] cacher (flowschemas.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003551       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	I0315 06:27:50.003567       1 trace.go:236] Trace[1142995160]: "Reflector ListAndWatch" name:storage/cacher.go:/serviceaccounts (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[1142995160]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[1142995160]: [13.099504673s] [13.099504673s] END
	E0315 06:27:50.003590       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c] <==
	I0315 06:22:34.797179       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:22:35.042135       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:22:35.042305       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:22:35.044748       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:22:35.045354       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:22:35.045398       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:22:35.045423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:22:56.022068       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306] <==
	W0315 06:27:45.987702       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0315 06:27:45.987799       1 node_lifecycle_controller.go:971] "Error updating node" err="Put \"https://192.168.39.78:8443/api/v1/nodes/ha-866665/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" node="ha-866665"
	W0315 06:27:45.989045       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0315 06:27:46.491360       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0315 06:27:47.493717       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0315 06:27:47.913355       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "pod-garbage-collector" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0315 06:27:47.913424       1 gc_controller.go:278] "Error while getting node" err="Get \"https://192.168.39.78:8443/api/v1/nodes/ha-866665-m03\": failed to get token for kube-system/pod-garbage-collector: timed out waiting for the condition" node="ha-866665-m03"
	W0315 06:27:49.496629       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0315 06:27:49.496744       1 node_lifecycle_controller.go:713] "Failed while getting a Node to retry updating node health. Probably Node was deleted" node="ha-866665"
	E0315 06:27:49.496774       1 node_lifecycle_controller.go:718] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.78:8443/api/v1/nodes/ha-866665\": failed to get token for kube-system/node-controller: timed out waiting for the condition" node=""
	W0315 06:27:49.498058       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0315 06:27:51.018428       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.78:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W0315 06:27:52.019211       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:53.442061       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.VolumeAttachment: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/volumeattachments?resourceVersion=2448": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:53.442292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/volumeattachments?resourceVersion=2448": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:54.020709       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:54.020850       1 node_lifecycle_controller.go:713] "Failed while getting a Node to retry updating node health. Probably Node was deleted" node="ha-866665-m02"
	E0315 06:27:54.020938       1 node_lifecycle_controller.go:718] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.78:8443/api/v1/nodes/ha-866665-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" node=""
	W0315 06:27:56.537882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PriorityClass: Get "https://192.168.39.78:8443/apis/scheduling.k8s.io/v1/priorityclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:56.537965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: Get "https://192.168.39.78:8443/apis/scheduling.k8s.io/v1/priorityclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:58.616001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.616065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:59.022584       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:59.307469       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2448": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:59.307558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2448": dial tcp 192.168.39.78:8443: connect: connection refused
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	E0315 06:26:01.167720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:01.167669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:01.167846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:04.241012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:04.241169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.456306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.456506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.457153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.457208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:16.532382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:16.532484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:34.959861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:34.959939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:14.897385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:14.897592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:17.967871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:17.968263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:21.039682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:21.039794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:54.832570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:54.832649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0] <==
	E0315 06:19:42.930531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:42.930409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:42.930653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:46.002420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:46.002515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:49.072662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:49.072814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:19:52.148400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:19:52.148636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:01.361557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:01.361669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:04.433846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:04.434046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:16.720661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:16.721007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.937899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:25.937978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:25.938058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:20:44.369019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:20:44.369356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1794": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3] <==
	E0315 06:20:46.187160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.254890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:20:46.255001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:20:46.269076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.269195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:46.317565       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:20:46.317635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:20:46.544563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:20:46.544616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:20:46.741363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 06:20:46.741423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 06:20:46.762451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 06:20:46.762543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 06:20:46.876133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:20:46.876166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:20:47.365393       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:20:47.365501       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:20:47.451958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:20:47.452070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:20:47.587631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:20:47.587662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0315 06:20:49.515923       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:20:49.516074       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:20:49.519966       1 run.go:74] "command failed" err="finished without leader elect"
	I0315 06:20:49.520010       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	E0315 06:27:30.505721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:27:33.101355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 06:27:33.101446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 06:27:33.106864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 06:27:33.106928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 06:27:33.721173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:27:33.721276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:27:34.581491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:34.581544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:36.411769       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:27:36.411881       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:27:36.473470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:27:36.473532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:27:37.175018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:27:37.175090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:27:38.621446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:27:38.621559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:27:39.985765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:39.985857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:41.948412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:41.948471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:58.053579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.053849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:58.945885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.945942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	
	
	==> kubelet <==
	Mar 15 06:27:42 ha-866665 kubelet[1369]: W0315 06:27:42.543990    1369 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2450": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 15 06:27:42 ha-866665 kubelet[1369]: E0315 06:27:42.544044    1369 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2450": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 15 06:27:42 ha-866665 kubelet[1369]: I0315 06:27:42.544113    1369 status_manager.go:853] "Failed to get status for pod" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6" pod="kube-system/kindnet-9nvvx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-9nvvx\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:44 ha-866665 kubelet[1369]: I0315 06:27:44.544051    1369 scope.go:117] "RemoveContainer" containerID="5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	Mar 15 06:27:44 ha-866665 kubelet[1369]: E0315 06:27:44.544495    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:27:45 ha-866665 kubelet[1369]: E0315 06:27:45.615911    1369 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-866665?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Mar 15 06:27:45 ha-866665 kubelet[1369]: I0315 06:27:45.615915    1369 status_manager.go:853] "Failed to get status for pod" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:48 ha-866665 kubelet[1369]: I0315 06:27:48.687803    1369 status_manager.go:853] "Failed to get status for pod" podUID="ec32969267e5d443d53332f70d668161" pod="kube-system/kube-apiserver-ha-866665" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-866665\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:48 ha-866665 kubelet[1369]: W0315 06:27:48.688380    1369 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2448": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 15 06:27:48 ha-866665 kubelet[1369]: E0315 06:27:48.688693    1369 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2448": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 15 06:27:51 ha-866665 kubelet[1369]: I0315 06:27:51.048675    1369 scope.go:117] "RemoveContainer" containerID="a912dc6e7f8063e86e2c32c4ce628fdacf363ffb8d1b0b39b9e71239a7a5d6db"
	Mar 15 06:27:51 ha-866665 kubelet[1369]: I0315 06:27:51.049340    1369 scope.go:117] "RemoveContainer" containerID="e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	Mar 15 06:27:51 ha-866665 kubelet[1369]: E0315 06:27:51.049861    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:27:51 ha-866665 kubelet[1369]: I0315 06:27:51.759765    1369 status_manager.go:853] "Failed to get status for pod" podUID="affdbe5d0709ec0c8cfe4e796df74130" pod="kube-system/kube-vip-ha-866665" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-866665\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:51 ha-866665 kubelet[1369]: E0315 06:27:51.759985    1369 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-vip-ha-866665.17bcdbb77de4ada6", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"1929", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-vip-ha-866665", UID:"affdbe5d0709ec0c8cfe4e796df74130", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-vip}"}, Reason:"BackOff", Message:"Back-off restarting failed container kube-vip in pod kube-vip-ha-866665_kube-system(affdbe5d0709ec0c8cfe4e796df74130)", Source:v1.EventSource{Component:"kubelet", Host:"ha-866665"}, FirstTimestamp:time.Date(2024, time.March, 15, 6, 18, 59, 0, time.Local), LastTimestamp:time.Date(2024, time.March, 15, 6, 25, 19, 202678896, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ha-866665"}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-vip-ha-866665.17bcdbb77de4ada6": dial tcp 192.168.39.254:8443: connect: no route to host'(may retry after sleeping)
	Mar 15 06:27:52 ha-866665 kubelet[1369]: I0315 06:27:52.055791    1369 scope.go:117] "RemoveContainer" containerID="e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	Mar 15 06:27:52 ha-866665 kubelet[1369]: E0315 06:27:52.056504    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:27:52 ha-866665 kubelet[1369]: I0315 06:27:52.543937    1369 scope.go:117] "RemoveContainer" containerID="a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe"
	Mar 15 06:27:54 ha-866665 kubelet[1369]: E0315 06:27:54.831801    1369 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-866665\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:54 ha-866665 kubelet[1369]: E0315 06:27:54.831803    1369 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-866665?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Mar 15 06:27:54 ha-866665 kubelet[1369]: I0315 06:27:54.831880    1369 status_manager.go:853] "Failed to get status for pod" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6" pod="kube-system/kindnet-9nvvx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-9nvvx\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:57 ha-866665 kubelet[1369]: I0315 06:27:57.544095    1369 scope.go:117] "RemoveContainer" containerID="5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	Mar 15 06:27:57 ha-866665 kubelet[1369]: E0315 06:27:57.544750    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:27:57 ha-866665 kubelet[1369]: E0315 06:27:57.903736    1369 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-866665\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 15 06:27:57 ha-866665 kubelet[1369]: I0315 06:27:57.904341    1369 status_manager.go:853] "Failed to get status for pod" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
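Every repeated kubelet failure in the stdout block above ends in "dial tcp 192.168.39.254:8443: connect: no route to host", i.e. the kube-vip control-plane VIP is not answering from the node. A minimal Go sketch of the same reachability check (purely illustrative, not part of the minikube test suite; only the address is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP address copied from the kubelet errors above; this probe is
		// only an illustration, not something the harness runs.
		const vip = "192.168.39.254:8443"
		conn, err := net.DialTimeout("tcp", vip, 3*time.Second)
		if err != nil {
			fmt.Println("VIP unreachable:", err) // e.g. "connect: no route to host"
			return
		}
		conn.Close()
		fmt.Println("VIP reachable")
	}

On a healthy HA cluster the dial succeeds and the apiserver behind the VIP answers on 8443; here it does not, which is consistent with the Patch/Get failures logged by the kubelet.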
** stderr ** 
	E0315 06:27:58.326991   33383 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
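The "bufio.Scanner: token too long" error in the stderr block above is the standard Go bufio.Scanner limit: by default a single token (line) may not exceed 64 KiB, and lastStart.txt evidently contains longer lines. A minimal sketch, assuming a plain Go reader over a hypothetical file path, of raising that limit with Scanner.Buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path for illustration; the report shows
		// .minikube/logs/lastStart.txt hitting the default 64 KiB token limit.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit to 1 MiB so very long log lines still scan.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process the line
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}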
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665: exit status 2 (230.720777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-866665" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (172.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (457.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-866665 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 06:29:21.071379   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:29:58.534958   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:30:44.118342   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:34:21.072233   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:34:58.532493   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-866665 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 80 (7m35.451450338s)
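The "Non-zero exit ... exit status 80" line comes from the harness running the minikube binary and inspecting the process exit code. A minimal sketch (hypothetical, shortened argument list; not minikube's actual test helper) of how such a non-zero exit surfaces through os/exec:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Shortened, hypothetical invocation; the real test passes the full
		// argument list shown in the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-866665", "--wait=true")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("non-zero exit: exit status %d\n", ee.ExitCode())
		}
		_ = out // combined stdout/stderr, captured for the report
	}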

                                                
                                                
-- stdout --
	* [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	* Updating the running kvm2 "ha-866665" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-866665-m02" control-plane node in "ha-866665" cluster
	* Restarting existing kvm2 VM for "ha-866665-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.78
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.78
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:28:00.069231   33437 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:28:00.069368   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069377   33437 out.go:304] Setting ErrFile to fd 2...
	I0315 06:28:00.069382   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069568   33437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:28:00.070093   33437 out.go:298] Setting JSON to false
	I0315 06:28:00.070988   33437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4176,"bootTime":1710479904,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:28:00.071057   33437 start.go:139] virtualization: kvm guest
	I0315 06:28:00.074620   33437 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:28:00.076308   33437 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:28:00.076319   33437 notify.go:220] Checking for updates...
	I0315 06:28:00.079197   33437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:28:00.080588   33437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:28:00.081864   33437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:28:00.083324   33437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:28:00.084651   33437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:28:00.086650   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.087036   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.087091   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.102114   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0315 06:28:00.102558   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.103095   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.103124   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.103438   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.103601   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.103876   33437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:28:00.104159   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.104210   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.119133   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0315 06:28:00.119585   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.120070   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.120090   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.120437   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.120651   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.156291   33437 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:28:00.157886   33437 start.go:297] selected driver: kvm2
	I0315 06:28:00.157902   33437 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.158040   33437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:28:00.158357   33437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.158422   33437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:28:00.174458   33437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:28:00.175133   33437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:28:00.175191   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:28:00.175203   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:28:00.175251   33437 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.175362   33437 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.177468   33437 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:28:00.179008   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:28:00.179040   33437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:28:00.179047   33437 cache.go:56] Caching tarball of preloaded images
	I0315 06:28:00.179131   33437 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:28:00.179142   33437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:28:00.179294   33437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:28:00.179480   33437 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:28:00.179520   33437 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-866665"
	I0315 06:28:00.179534   33437 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:28:00.179545   33437 fix.go:54] fixHost starting: 
	I0315 06:28:00.179780   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.179810   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.194943   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0315 06:28:00.195338   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.195810   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.195828   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.196117   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.196309   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.196495   33437 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:28:00.198137   33437 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:28:00.198153   33437 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:28:00.200161   33437 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:28:00.201473   33437 machine.go:94] provisionDockerMachine start ...
	I0315 06:28:00.201503   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.201694   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.204348   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204777   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.204797   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204937   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.205101   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205264   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205376   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.205519   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.205700   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.205711   33437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:28:00.305507   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.305537   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305774   33437 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:28:00.305803   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305989   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.308802   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309169   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.309190   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309354   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.309553   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.309826   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.310014   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.310190   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.310366   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.310382   33437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:28:00.429403   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.429432   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.432235   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432606   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.432644   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432809   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.432999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433159   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433289   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.433507   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.433711   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.433736   33437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:28:00.533992   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:28:00.534024   33437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:28:00.534042   33437 buildroot.go:174] setting up certificates
	I0315 06:28:00.534050   33437 provision.go:84] configureAuth start
	I0315 06:28:00.534059   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.534324   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:28:00.536932   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537280   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.537309   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537403   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.539778   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540170   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.540188   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540352   33437 provision.go:143] copyHostCerts
	I0315 06:28:00.540374   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540409   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:28:00.540418   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540502   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:28:00.540577   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540595   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:28:00.540602   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540626   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:28:00.540689   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540712   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:28:00.540721   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540757   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:28:00.540858   33437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:28:00.727324   33437 provision.go:177] copyRemoteCerts
	I0315 06:28:00.727392   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:28:00.727415   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.730386   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.730795   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.730817   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.731033   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.731269   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.731448   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.731603   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:28:00.811679   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:28:00.811760   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:28:00.840244   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:28:00.840325   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:28:00.866687   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:28:00.866766   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:28:00.893745   33437 provision.go:87] duration metric: took 359.681699ms to configureAuth
	I0315 06:28:00.893783   33437 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:28:00.894043   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.894134   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.897023   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897388   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.897411   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897569   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.897752   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.897920   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.898052   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.898189   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.898433   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.898471   33437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:29:35.718292   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:29:35.718325   33437 machine.go:97] duration metric: took 1m35.516837024s to provisionDockerMachine
	I0315 06:29:35.718343   33437 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:29:35.718359   33437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:29:35.718374   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.718720   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:29:35.718757   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.722200   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722789   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.722838   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722915   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.723113   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.723278   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.723452   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:35.808948   33437 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:29:35.813922   33437 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:29:35.813958   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:29:35.814035   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:29:35.814150   33437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:29:35.814165   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:29:35.814262   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:29:35.825162   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:29:35.853607   33437 start.go:296] duration metric: took 135.248885ms for postStartSetup
	I0315 06:29:35.853656   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.853968   33437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:29:35.853999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.857046   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857515   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.857538   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857740   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.857904   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.858174   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.858327   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:29:35.939552   33437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:29:35.939581   33437 fix.go:56] duration metric: took 1m35.76003955s for fixHost
	I0315 06:29:35.939603   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.942284   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942621   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.942656   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942842   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.943040   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943209   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943341   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.943527   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:29:35.943686   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:29:35.943696   33437 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:29:36.045713   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710484176.008370872
	
	I0315 06:29:36.045741   33437 fix.go:216] guest clock: 1710484176.008370872
	I0315 06:29:36.045749   33437 fix.go:229] Guest: 2024-03-15 06:29:36.008370872 +0000 UTC Remote: 2024-03-15 06:29:35.939588087 +0000 UTC m=+95.917046644 (delta=68.782785ms)
	I0315 06:29:36.045784   33437 fix.go:200] guest clock delta is within tolerance: 68.782785ms
	I0315 06:29:36.045790   33437 start.go:83] releasing machines lock for "ha-866665", held for 1m35.866260772s
	I0315 06:29:36.045808   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.046095   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:29:36.048748   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049090   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.049125   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049284   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.049937   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050138   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050191   33437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:29:36.050244   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.050342   33437 ssh_runner.go:195] Run: cat /version.json
	I0315 06:29:36.050361   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.053057   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053439   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053473   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053529   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053632   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.053795   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.053957   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.053958   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053983   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.054117   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.054145   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.054330   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.054470   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.054647   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.174816   33437 ssh_runner.go:195] Run: systemctl --version
	I0315 06:29:36.181715   33437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:29:36.358096   33437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:29:36.367581   33437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:29:36.367659   33437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:29:36.383454   33437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:29:36.383480   33437 start.go:494] detecting cgroup driver to use...
	I0315 06:29:36.383550   33437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:29:36.407514   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:29:36.425757   33437 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:29:36.425807   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:29:36.448161   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:29:36.466873   33437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:29:36.634934   33437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:29:36.809139   33437 docker.go:233] disabling docker service ...
	I0315 06:29:36.809211   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:29:36.831715   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:29:36.847966   33437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:29:37.006211   33437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:29:37.162186   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:29:37.178537   33437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:29:37.200300   33437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:29:37.200368   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.212398   33437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:29:37.212455   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.223908   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.235824   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.247520   33437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:29:37.259008   33437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:29:37.269062   33437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:29:37.281152   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:29:37.434941   33437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:31:11.689384   33437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.254391387s)
	I0315 06:31:11.689430   33437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:31:11.689496   33437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:31:11.698102   33437 start.go:562] Will wait 60s for crictl version
	I0315 06:31:11.698154   33437 ssh_runner.go:195] Run: which crictl
	I0315 06:31:11.702605   33437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:31:11.746302   33437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:31:11.746373   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.777004   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.813410   33437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:31:11.815123   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:31:11.818257   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818696   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:31:11.818717   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818982   33437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:31:11.824253   33437 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:31:11.824379   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:31:11.824419   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.877400   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.877423   33437 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:31:11.877466   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.913358   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.913383   33437 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:31:11.913393   33437 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:31:11.913524   33437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:31:11.913604   33437 ssh_runner.go:195] Run: crio config
	I0315 06:31:11.961648   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:31:11.961666   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:31:11.961674   33437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:31:11.961692   33437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:31:11.961854   33437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
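Note: the config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. As an illustration only, and not minikube's own code, a small Go sketch that splits such a stream and reports each document's apiVersion and kind could look like this (the file name is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // header captures only the fields needed to identify each YAML document.
    type header struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	raw, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the generated config
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The documents above are separated by a bare "---" line.
    	for i, doc := range strings.Split(string(raw), "\n---\n") {
    		var h header
    		if err := yaml.Unmarshal([]byte(doc), &h); err != nil {
    			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
    			continue
    		}
    		fmt.Printf("document %d: %s/%s\n", i, h.APIVersion, h.Kind)
    	}
    }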
	
	I0315 06:31:11.961877   33437 kube-vip.go:111] generating kube-vip config ...
	I0315 06:31:11.961925   33437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:31:11.974783   33437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:31:11.974879   33437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
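Note: kube-vip.go:133 prints the static-pod manifest it has just generated for the VIP 192.168.39.254, which is then copied to /etc/kubernetes/manifests/kube-vip.yaml. Purely as a sketch under assumed names, and not minikube's actual template, rendering a trimmed-down version of such a manifest from a Go text/template might look like:

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipParams holds the only values that vary between clusters in this sketch.
    type vipParams struct {
    	Address string // control-plane VIP, e.g. 192.168.39.254
    	Port    string // API server port, e.g. "8443"
    }

    const podTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: "{{.Address}}"}
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        name: kube-vip
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(podTmpl))
    	// Render to stdout; the real flow writes the result to the static-pod manifest directory.
    	if err := t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443"}); err != nil {
    		panic(err)
    	}
    }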
	I0315 06:31:11.974928   33437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:31:11.985708   33437 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:31:11.985779   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:31:11.996849   33437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:31:12.015133   33437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:31:12.032728   33437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:31:12.050473   33437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:31:12.071279   33437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:31:12.075748   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:31:12.239653   33437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:31:12.288563   33437 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:31:12.288592   33437 certs.go:194] generating shared ca certs ...
	I0315 06:31:12.288612   33437 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.288830   33437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:31:12.288895   33437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:31:12.288911   33437 certs.go:256] generating profile certs ...
	I0315 06:31:12.289016   33437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:31:12.289054   33437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211
	I0315 06:31:12.289075   33437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:31:12.459406   33437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 ...
	I0315 06:31:12.459436   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211: {Name:mkce2140c17c76a43eac310ec6de314aee20f623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459623   33437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 ...
	I0315 06:31:12.459635   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211: {Name:mk03c3e33d6e2b84dc52dfa74e4afefa164f8f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459705   33437 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:31:12.459841   33437 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:31:12.459958   33437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:31:12.459972   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:31:12.459985   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:31:12.459995   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:31:12.460005   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:31:12.460016   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:31:12.460025   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:31:12.460039   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:31:12.460056   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:31:12.460105   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:31:12.460142   33437 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:31:12.460148   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:31:12.460168   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:31:12.460186   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:31:12.460202   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:31:12.460236   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:31:12.460264   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:12.460276   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:31:12.460287   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:31:12.460824   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:31:12.560429   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:31:12.636236   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:31:12.785307   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:31:12.994552   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 06:31:13.094315   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:31:13.148757   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:31:13.186255   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:31:13.219468   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:31:13.298432   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:31:13.356395   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:31:13.388275   33437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:31:13.416436   33437 ssh_runner.go:195] Run: openssl version
	I0315 06:31:13.425310   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:31:13.441059   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446108   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446176   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.452449   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:31:13.463777   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:31:13.475540   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480658   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480725   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.490764   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:31:13.507483   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:31:13.522259   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527387   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527450   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.535340   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:31:13.547679   33437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:31:13.555390   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:31:13.561596   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:31:13.569097   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:31:13.575180   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:31:13.581211   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:31:13.589108   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
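Note: the six `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours. A hedged Go equivalent of that check, assuming a PEM-encoded certificate on disk (the path below is illustrative), is:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative path; the log checks the apiserver, etcd and front-proxy certs this way.
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM data found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same semantics as `openssl x509 -checkend 86400`: fail if the cert
    	// will already be expired 24 hours from now.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; regeneration needed")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }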
	I0315 06:31:13.597196   33437 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:31:13.597364   33437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:31:13.597432   33437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:31:13.654360   33437 cri.go:89] found id: "a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703"
	I0315 06:31:13.654388   33437 cri.go:89] found id: "f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a"
	I0315 06:31:13.654393   33437 cri.go:89] found id: "4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2"
	I0315 06:31:13.654401   33437 cri.go:89] found id: "53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869"
	I0315 06:31:13.654406   33437 cri.go:89] found id: "bdde9d3309aa653bdde9bc5fb009352128cc082c6210723aabf3090316773af4"
	I0315 06:31:13.654411   33437 cri.go:89] found id: "e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a"
	I0315 06:31:13.654414   33437 cri.go:89] found id: "e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	I0315 06:31:13.654418   33437 cri.go:89] found id: "cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7"
	I0315 06:31:13.654422   33437 cri.go:89] found id: "5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	I0315 06:31:13.654429   33437 cri.go:89] found id: "a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe"
	I0315 06:31:13.654434   33437 cri.go:89] found id: "e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306"
	I0315 06:31:13.654438   33437 cri.go:89] found id: "c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e"
	I0315 06:31:13.654447   33437 cri.go:89] found id: "950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088"
	I0315 06:31:13.654451   33437 cri.go:89] found id: "a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2"
	I0315 06:31:13.654473   33437 cri.go:89] found id: "20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a"
	I0315 06:31:13.654481   33437 cri.go:89] found id: "002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c"
	I0315 06:31:13.654485   33437 cri.go:89] found id: "f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c"
	I0315 06:31:13.654494   33437 cri.go:89] found id: "8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de"
	I0315 06:31:13.654501   33437 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:31:13.654506   33437 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:31:13.654513   33437 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:31:13.654517   33437 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:31:13.654524   33437 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:31:13.654528   33437 cri.go:89] found id: ""
	I0315 06:31:13.654578   33437 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-866665 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665: exit status 2 (246.850918ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.395800107s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	| node    | ha-866665 node delete m03 -v=7                                                   | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC | 15 Mar 24 06:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-866665 stop -v=7                                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true                                                         | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:28 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:28:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:28:00.069231   33437 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:28:00.069368   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069377   33437 out.go:304] Setting ErrFile to fd 2...
	I0315 06:28:00.069382   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069568   33437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:28:00.070093   33437 out.go:298] Setting JSON to false
	I0315 06:28:00.070988   33437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4176,"bootTime":1710479904,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:28:00.071057   33437 start.go:139] virtualization: kvm guest
	I0315 06:28:00.074620   33437 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:28:00.076308   33437 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:28:00.076319   33437 notify.go:220] Checking for updates...
	I0315 06:28:00.079197   33437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:28:00.080588   33437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:28:00.081864   33437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:28:00.083324   33437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:28:00.084651   33437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:28:00.086650   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.087036   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.087091   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.102114   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0315 06:28:00.102558   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.103095   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.103124   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.103438   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.103601   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.103876   33437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:28:00.104159   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.104210   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.119133   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0315 06:28:00.119585   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.120070   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.120090   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.120437   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.120651   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.156291   33437 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:28:00.157886   33437 start.go:297] selected driver: kvm2
	I0315 06:28:00.157902   33437 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.158040   33437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:28:00.158357   33437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.158422   33437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:28:00.174458   33437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:28:00.175133   33437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:28:00.175191   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:28:00.175203   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:28:00.175251   33437 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.175362   33437 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.177468   33437 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:28:00.179008   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:28:00.179040   33437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:28:00.179047   33437 cache.go:56] Caching tarball of preloaded images
	I0315 06:28:00.179131   33437 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:28:00.179142   33437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:28:00.179294   33437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:28:00.179480   33437 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:28:00.179520   33437 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-866665"
	I0315 06:28:00.179534   33437 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:28:00.179545   33437 fix.go:54] fixHost starting: 
	I0315 06:28:00.179780   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.179810   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.194943   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0315 06:28:00.195338   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.195810   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.195828   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.196117   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.196309   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.196495   33437 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:28:00.198137   33437 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:28:00.198153   33437 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:28:00.200161   33437 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:28:00.201473   33437 machine.go:94] provisionDockerMachine start ...
	I0315 06:28:00.201503   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.201694   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.204348   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204777   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.204797   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204937   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.205101   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205264   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205376   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.205519   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.205700   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.205711   33437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:28:00.305507   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.305537   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305774   33437 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:28:00.305803   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305989   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.308802   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309169   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.309190   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309354   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.309553   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.309826   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.310014   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.310190   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.310366   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.310382   33437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:28:00.429403   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.429432   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.432235   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432606   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.432644   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432809   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.432999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433159   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433289   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.433507   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.433711   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.433736   33437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:28:00.533992   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
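Note: ssh_runner executes provisioning commands like the hostname fix-up script above over SSH to the VM at 192.168.39.78:22. A minimal sketch of running one such command with golang.org/x/crypto/ssh (not minikube's runner; key path and address are illustrative) is:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Illustrative key path; the log uses the machine profile's id_rsa.
    	key, err := os.ReadFile("/home/jenkins/.ssh/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.78:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	var out bytes.Buffer
    	sess.Stdout = &out
    	// The same idempotent hostname fix-up the provisioner runs above.
    	if err := sess.Run(`grep -xq '.*\sha-866665' /etc/hosts || echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts`); err != nil {
    		panic(err)
    	}
    	fmt.Print(out.String())
    }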
	I0315 06:28:00.534024   33437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:28:00.534042   33437 buildroot.go:174] setting up certificates
	I0315 06:28:00.534050   33437 provision.go:84] configureAuth start
	I0315 06:28:00.534059   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.534324   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:28:00.536932   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537280   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.537309   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537403   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.539778   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540170   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.540188   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540352   33437 provision.go:143] copyHostCerts
	I0315 06:28:00.540374   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540409   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:28:00.540418   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540502   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:28:00.540577   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540595   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:28:00.540602   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540626   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:28:00.540689   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540712   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:28:00.540721   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540757   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:28:00.540858   33437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:28:00.727324   33437 provision.go:177] copyRemoteCerts
	I0315 06:28:00.727392   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:28:00.727415   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.730386   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.730795   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.730817   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.731033   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.731269   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.731448   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.731603   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:28:00.811679   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:28:00.811760   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:28:00.840244   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:28:00.840325   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:28:00.866687   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:28:00.866766   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:28:00.893745   33437 provision.go:87] duration metric: took 359.681699ms to configureAuth
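The configureAuth phase above regenerates the machine's server certificate with SANs for 127.0.0.1, 192.168.39.78, ha-866665, localhost and minikube, then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A minimal Go sketch, not minikube's own code and assuming a local server.pem path, that prints the SANs and expiry of such a certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path is an assumption for illustration; point it at any PEM-encoded certificate.
    	data, err := os.ReadFile("server.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames) // e.g. [ha-866665 localhost minikube]
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    	fmt.Println("Expires: ", cert.NotAfter)
    }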
	I0315 06:28:00.893783   33437 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:28:00.894043   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.894134   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.897023   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897388   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.897411   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897569   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.897752   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.897920   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.898052   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.898189   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.898433   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.898471   33437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:29:35.718292   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:29:35.718325   33437 machine.go:97] duration metric: took 1m35.516837024s to provisionDockerMachine
	I0315 06:29:35.718343   33437 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:29:35.718359   33437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:29:35.718374   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.718720   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:29:35.718757   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.722200   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722789   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.722838   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722915   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.723113   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.723278   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.723452   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:35.808948   33437 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:29:35.813922   33437 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:29:35.813958   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:29:35.814035   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:29:35.814150   33437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:29:35.814165   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:29:35.814262   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:29:35.825162   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:29:35.853607   33437 start.go:296] duration metric: took 135.248885ms for postStartSetup
	I0315 06:29:35.853656   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.853968   33437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:29:35.853999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.857046   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857515   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.857538   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857740   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.857904   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.858174   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.858327   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:29:35.939552   33437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:29:35.939581   33437 fix.go:56] duration metric: took 1m35.76003955s for fixHost
	I0315 06:29:35.939603   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.942284   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942621   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.942656   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942842   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.943040   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943209   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943341   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.943527   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:29:35.943686   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:29:35.943696   33437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:29:36.045713   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710484176.008370872
	
	I0315 06:29:36.045741   33437 fix.go:216] guest clock: 1710484176.008370872
	I0315 06:29:36.045749   33437 fix.go:229] Guest: 2024-03-15 06:29:36.008370872 +0000 UTC Remote: 2024-03-15 06:29:35.939588087 +0000 UTC m=+95.917046644 (delta=68.782785ms)
	I0315 06:29:36.045784   33437 fix.go:200] guest clock delta is within tolerance: 68.782785ms
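The fixHost step compares the guest clock read over SSH against the local clock at the time of the read and only resyncs when the difference exceeds a tolerance; here the 68.782785ms delta was accepted. A small Go sketch of that comparison using the two timestamps from the log entry above (the 2-second tolerance is an assumption for illustration, not minikube's actual threshold):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the log above: guest clock vs. local clock at read time.
    	guest := time.Unix(1710484176, 8370872)   // 2024-03-15 06:29:36.008370872 UTC
    	local := time.Unix(1710484175, 939588087) // 2024-03-15 06:29:35.939588087 UTC

    	// Assumed tolerance for this illustration.
    	const tolerance = 2 * time.Second

    	delta := guest.Sub(local)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // 68.782785ms
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; clock would be resynced\n", delta)
    	}
    }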
	I0315 06:29:36.045790   33437 start.go:83] releasing machines lock for "ha-866665", held for 1m35.866260772s
	I0315 06:29:36.045808   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.046095   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:29:36.048748   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049090   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.049125   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049284   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.049937   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050138   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050191   33437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:29:36.050244   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.050342   33437 ssh_runner.go:195] Run: cat /version.json
	I0315 06:29:36.050361   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.053057   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053439   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053473   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053529   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053632   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.053795   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.053957   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.053958   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053983   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.054117   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.054145   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.054330   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.054470   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.054647   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.174816   33437 ssh_runner.go:195] Run: systemctl --version
	I0315 06:29:36.181715   33437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:29:36.358096   33437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:29:36.367581   33437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:29:36.367659   33437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:29:36.383454   33437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:29:36.383480   33437 start.go:494] detecting cgroup driver to use...
	I0315 06:29:36.383550   33437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:29:36.407514   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:29:36.425757   33437 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:29:36.425807   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:29:36.448161   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:29:36.466873   33437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:29:36.634934   33437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:29:36.809139   33437 docker.go:233] disabling docker service ...
	I0315 06:29:36.809211   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:29:36.831715   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:29:36.847966   33437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:29:37.006211   33437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:29:37.162186   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:29:37.178537   33437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:29:37.200300   33437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:29:37.200368   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.212398   33437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:29:37.212455   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.223908   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.235824   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.247520   33437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:29:37.259008   33437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:29:37.269062   33437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:29:37.281152   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:29:37.434941   33437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:31:11.689384   33437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.254391387s)
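Before restarting CRI-O, the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.9 and switching the cgroup manager to cgroupfs. A stdlib-only Go sketch of the same line-oriented rewrite that the logged sed invocations perform (the sample config content is an assumption, not the real file):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Sample content standing in for /etc/crio/crio.conf.d/02-crio.conf (an assumption).
    	conf := "# pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"

    	// Mirror the two sed substitutions from the log: replace whole matching lines.
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	fmt.Print(conf)
    }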
	I0315 06:31:11.689430   33437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:31:11.689496   33437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:31:11.698102   33437 start.go:562] Will wait 60s for crictl version
	I0315 06:31:11.698154   33437 ssh_runner.go:195] Run: which crictl
	I0315 06:31:11.702605   33437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:31:11.746302   33437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:31:11.746373   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.777004   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.813410   33437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:31:11.815123   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:31:11.818257   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818696   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:31:11.818717   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818982   33437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:31:11.824253   33437 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:31:11.824379   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:31:11.824419   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.877400   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.877423   33437 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:31:11.877466   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.913358   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.913383   33437 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:31:11.913393   33437 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:31:11.913524   33437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:31:11.913604   33437 ssh_runner.go:195] Run: crio config
	I0315 06:31:11.961648   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:31:11.961666   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:31:11.961674   33437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:31:11.961692   33437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:31:11.961854   33437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
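The kubeadm.yaml generated above is a single file containing four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". A naive, stdlib-only Go sketch (an illustration, not minikube code; the path is an assumption) that splits such a file and reports the kind of each document:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	// Path is an assumption; point it at a generated kubeadm.yaml.
    	data, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Naive document splitter: good enough for the flat config shown above.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
    				break
    			}
    		}
    		fmt.Printf("document %d: kind=%s\n", i+1, kind)
    	}
    }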
	
	I0315 06:31:11.961877   33437 kube-vip.go:111] generating kube-vip config ...
	I0315 06:31:11.961925   33437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:31:11.974783   33437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:31:11.974879   33437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 06:31:11.974928   33437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:31:11.985708   33437 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:31:11.985779   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:31:11.996849   33437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:31:12.015133   33437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:31:12.032728   33437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:31:12.050473   33437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:31:12.071279   33437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:31:12.075748   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:31:12.239653   33437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:31:12.288563   33437 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:31:12.288592   33437 certs.go:194] generating shared ca certs ...
	I0315 06:31:12.288612   33437 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.288830   33437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:31:12.288895   33437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:31:12.288911   33437 certs.go:256] generating profile certs ...
	I0315 06:31:12.289016   33437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:31:12.289054   33437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211
	I0315 06:31:12.289075   33437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:31:12.459406   33437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 ...
	I0315 06:31:12.459436   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211: {Name:mkce2140c17c76a43eac310ec6de314aee20f623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459623   33437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 ...
	I0315 06:31:12.459635   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211: {Name:mk03c3e33d6e2b84dc52dfa74e4afefa164f8f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459705   33437 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:31:12.459841   33437 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:31:12.459958   33437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:31:12.459972   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:31:12.459985   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:31:12.459995   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:31:12.460005   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:31:12.460016   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:31:12.460025   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:31:12.460039   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:31:12.460056   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:31:12.460105   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:31:12.460142   33437 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:31:12.460148   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:31:12.460168   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:31:12.460186   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:31:12.460202   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:31:12.460236   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:31:12.460264   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:12.460276   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:31:12.460287   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:31:12.460824   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:31:12.560429   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:31:12.636236   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:31:12.785307   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:31:12.994552   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 06:31:13.094315   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:31:13.148757   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:31:13.186255   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:31:13.219468   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:31:13.298432   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:31:13.356395   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:31:13.388275   33437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:31:13.416436   33437 ssh_runner.go:195] Run: openssl version
	I0315 06:31:13.425310   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:31:13.441059   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446108   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446176   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.452449   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:31:13.463777   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:31:13.475540   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480658   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480725   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.490764   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:31:13.507483   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:31:13.522259   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527387   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527450   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.535340   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:31:13.547679   33437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:31:13.555390   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:31:13.561596   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:31:13.569097   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:31:13.575180   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:31:13.581211   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:31:13.589108   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
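Before deciding whether certificates need to be regenerated, the checks above run openssl x509 -noout -in <cert> -checkend 86400 against the apiserver, etcd and front-proxy client certificates to confirm that none of them expires within the next 24 hours. A hedged Go equivalent of that check, assuming an illustrative local PEM path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within the given window (the analogue of openssl's -checkend flag).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// The path is an assumption for illustration.
    	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }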
	I0315 06:31:13.597196   33437 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:31:13.597364   33437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:31:13.597432   33437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:31:13.654360   33437 cri.go:89] found id: "a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703"
	I0315 06:31:13.654388   33437 cri.go:89] found id: "f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a"
	I0315 06:31:13.654393   33437 cri.go:89] found id: "4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2"
	I0315 06:31:13.654401   33437 cri.go:89] found id: "53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869"
	I0315 06:31:13.654406   33437 cri.go:89] found id: "bdde9d3309aa653bdde9bc5fb009352128cc082c6210723aabf3090316773af4"
	I0315 06:31:13.654411   33437 cri.go:89] found id: "e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a"
	I0315 06:31:13.654414   33437 cri.go:89] found id: "e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	I0315 06:31:13.654418   33437 cri.go:89] found id: "cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7"
	I0315 06:31:13.654422   33437 cri.go:89] found id: "5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	I0315 06:31:13.654429   33437 cri.go:89] found id: "a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe"
	I0315 06:31:13.654434   33437 cri.go:89] found id: "e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306"
	I0315 06:31:13.654438   33437 cri.go:89] found id: "c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e"
	I0315 06:31:13.654447   33437 cri.go:89] found id: "950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088"
	I0315 06:31:13.654451   33437 cri.go:89] found id: "a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2"
	I0315 06:31:13.654473   33437 cri.go:89] found id: "20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a"
	I0315 06:31:13.654481   33437 cri.go:89] found id: "002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c"
	I0315 06:31:13.654485   33437 cri.go:89] found id: "f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c"
	I0315 06:31:13.654494   33437 cri.go:89] found id: "8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de"
	I0315 06:31:13.654501   33437 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:31:13.654506   33437 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:31:13.654513   33437 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:31:13.654517   33437 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:31:13.654524   33437 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:31:13.654528   33437 cri.go:89] found id: ""
	I0315 06:31:13.654578   33437 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.084152224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574d9e3d-a7d0-4a10-b976-b9d6890a26b3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.084652012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=574d9e3d-a7d0-4a10-b976-b9d6890a26b3 name=/runtime.v1.RuntimeService/ListContainers
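The entries above and below are crio answering the kubelet's periodic CRI polling: the same Version, ImageFsInfo and ListContainers round-trips recur every few tens of milliseconds, which is why the full container inventory is dumped repeatedly. As a reading aid only (not part of the minikube test suite), here is a minimal Go sketch that issues the same /runtime.v1.RuntimeService/ListContainers RPC with the k8s.io/cri-api client; the crio socket path and the output formatting are assumptions.

// Sketch: reproduce the ListContainers call logged above (assumed socket path).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed endpoint; crio's default socket location.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty request matches the "No filters were applied, returning
	// full container list" path seen in the log.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Container IDs in this log are 64 hex chars; print a short prefix.
		fmt.Printf("%-13s %-25s %s\n", c.Id[:12], c.Metadata.GetName(), c.State)
	}
}

The same inventory can also be pulled on the node with crictl ps -a.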
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.127100279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcca7cac-5c0c-47e9-b9b3-107b1cade980 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.127204836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcca7cac-5c0c-47e9-b9b3-107b1cade980 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.128723299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=414562c6-82d8-4da0-9324-69d9e05634b5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.129189093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484536129164327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=414562c6-82d8-4da0-9324-69d9e05634b5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.129840970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a94d7192-00d8-4ea1-b290-c2a1f584f613 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.129924727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a94d7192-00d8-4ea1-b290-c2a1f584f613 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.130428272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a94d7192-00d8-4ea1-b290-c2a1f584f613 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.137737420Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=98c235c1-662f-41e3-a615-0728df0600bf name=/runtime.v1.RuntimeService/Status
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.137803091Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=98c235c1-662f-41e3-a615-0728df0600bf name=/runtime.v1.RuntimeService/Status
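The Status response just above reports both RuntimeReady and NetworkReady as true, i.e. crio itself considers the runtime healthy even though several control-plane containers in the lists above show high restart counts and CONTAINER_EXITED states. As a hedged illustration only, a small helper (runtimeReady is a hypothetical name, reusing the runtimeapi client from the sketch above) shows how that readiness check maps onto the CRI API:

// runtimeReady returns true only if every RuntimeCondition reported by
// /runtime.v1.RuntimeService/Status (RuntimeReady, NetworkReady, ...) is true.
func runtimeReady(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (bool, error) {
	resp, err := rt.Status(ctx, &runtimeapi.StatusRequest{Verbose: false})
	if err != nil {
		return false, err
	}
	for _, cond := range resp.GetStatus().GetConditions() {
		if !cond.Status {
			return false, nil
		}
	}
	return true, nil
}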
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.175480586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f5dcffe-f79b-46a6-9a8a-6c3ea36d1501 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.175556274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f5dcffe-f79b-46a6-9a8a-6c3ea36d1501 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.176927092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c069b5d6-2c54-42e3-86c0-988bdb5f7fd6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.177465486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484536177440470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c069b5d6-2c54-42e3-86c0-988bdb5f7fd6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.177978333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5b5a2d5-6430-4763-945a-6c43ac7f246f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.178040036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5b5a2d5-6430-4763-945a-6c43ac7f246f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.178465477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5b5a2d5-6430-4763-945a-6c43ac7f246f name=/runtime.v1.RuntimeService/ListContainers
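Each ListContainers request in this capture arrives without a filter, so crio returns the full inventory every time (hence the repeated "No filters were applied, returning full container list" lines). For comparison, a hedged sketch of the same RPC with a state filter attached (listExitedContainers is a hypothetical helper, not minikube code) would return only the CONTAINER_EXITED entries:

// listExitedContainers issues ListContainers with a state filter so the
// runtime returns only exited containers instead of the full list.
func listExitedContainers(ctx context.Context, rt runtimeapi.RuntimeServiceClient) ([]*runtimeapi.Container, error) {
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ListContainersFilter{
			State: &runtimeapi.ContainerStateValue{
				State: runtimeapi.ContainerState_CONTAINER_EXITED,
			},
		},
	})
	if err != nil {
		return nil, err
	}
	return resp.Containers, nil
}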
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.218937933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79c7e966-b09c-4e4d-bea8-07345c364ea3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.219033656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79c7e966-b09c-4e4d-bea8-07345c364ea3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.220167194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5782ed62-92e0-4c2a-b8e3-2b005828fc3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.220683568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484536220661107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5782ed62-92e0-4c2a-b8e3-2b005828fc3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.221213497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b3b828f-fdfd-4d6c-9be8-67ed6d2e588c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.221342787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b3b828f-fdfd-4d6c-9be8-67ed6d2e588c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:36 ha-866665 crio[6969]: time="2024-03-15 06:35:36.222404115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b3b828f-fdfd-4d6c-9be8-67ed6d2e588c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f4c9a0644c94       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   About a minute ago   Exited              kube-controller-manager   5                   16c817ba7d264       kube-controller-manager-ha-866665
	8568bece5a49f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   3 minutes ago        Running             busybox                   2                   1a1fdba5ec224       busybox-5b5d89c9d6-82knb
	10518fb395cce       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   4 minutes ago        Exited              kindnet-cni               5                   d7a69a3a337af       kindnet-9nvvx
	fd937a8a91dec       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   4 minutes ago        Running             kube-proxy                2                   9f2fb2d671096       kube-proxy-sbxgg
	f31b9e9704e22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 minutes ago        Exited              storage-provisioner       6                   a0a76006f9e8a       storage-provisioner
	959c3adf756ac       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   4 minutes ago        Running             etcd                      2                   22072600a839b       etcd-ha-866665
	a3b8244e29a11       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 minutes ago        Running             coredns                   2                   cf57c2ff9f3b2       coredns-5dd5756b68-r57px
	f8f276ed61ae4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 minutes ago        Running             coredns                   2                   14af170de4b57       coredns-5dd5756b68-mgthb
	4632db4347aeb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   4 minutes ago        Running             kube-vip                  5                   09c39329a8da2       kube-vip-ha-866665
	53858589abe09       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   4 minutes ago        Running             kube-scheduler            2                   7f2d26260bc13       kube-scheduler-ha-866665
	e4370ef8479c8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   8 minutes ago        Exited              kube-apiserver            4                   f7b655acbd708       kube-apiserver-ha-866665
	cb4635f3b41c2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   9 minutes ago        Exited              kube-vip                  4                   b3fef0e73d7bb       kube-vip-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   12 minutes ago       Exited              busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago       Exited              coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago       Exited              kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago       Exited              etcd                      1                   79337bac30908       etcd-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago       Exited              kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago       Exited              coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
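
The table above is the CRI-level view of containers on the ha-866665 control-plane node: the same workloads that appear in the ListContainers responses earlier, with the exited attempts of kube-controller-manager, kindnet-cni and kube-apiserver standing out. As a rough sketch, the listing can be reproduced over SSH with crictl, which ships alongside the cri-o runtime in this image:

  # list all containers (running and exited) as the CRI sees them
  minikube ssh -p ha-866665 "sudo crictl ps -a"

The profile name ha-866665 is taken from the pod names in the table above.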
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	Trace[800224354]: ---"Objects listed" error:Unauthorized 12252ms (06:27:21.915)
	Trace[800224354]: [12.252932189s] [12.252932189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[532336764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:10.595) (total time: 11322ms):
	Trace[532336764]: ---"Objects listed" error:Unauthorized 11321ms (06:27:21.916)
	Trace[532336764]: [11.322096854s] [11.322096854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1149679676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:25.734) (total time: 10173ms):
	Trace[1149679676]: ---"Objects listed" error:Unauthorized 10171ms (06:27:35.906)
	Trace[1149679676]: [10.173827374s] [10.173827374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
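
10.96.0.1:443 in the errors above is the default ClusterIP of the kubernetes Service, i.e. the in-cluster path to the API server; the earlier Unauthorized responses and the later connection-refused errors both point at the control plane being restarted out from under this CoreDNS replica rather than at a DNS problem. Once the API server is back, a minimal check that the ClusterIP has backing endpoints again (assuming the kubeconfig context carries the profile name, as minikube normally sets it) would be:

  # the kubernetes Service endpoints should list the control-plane address(es)
  kubectl --context ha-866665 get endpoints kubernetes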
	
	
	==> coredns [a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40115 - 43756 "HINFO IN 2717951387138798829.7180821467390164679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010608616s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[652911396]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.712) (total time: 11205ms):
	Trace[652911396]: ---"Objects listed" error:Unauthorized 11205ms (06:27:35.918)
	Trace[652911396]: [11.205453768s] [11.205453768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[659281961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.824) (total time: 11093ms):
	Trace[659281961]: ---"Objects listed" error:Unauthorized 11093ms (06:27:35.918)
	Trace[659281961]: [11.093964434s] [11.093964434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47355 - 18813 "HINFO IN 6697230244971142980.3977120107732033871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009991661s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
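
kubectl here talks to localhost:8443 on the node itself, so the refusal means no kube-apiserver was listening locally when the logs were collected, which matches the Exited kube-apiserver entry in the container status table. A sketch of a follow-up probe, assuming curl is present in the VM image as it usually is, would be:

  # connectivity probe against the local API server port
  minikube ssh -p ha-866665 "curl -sk https://localhost:8443/healthz"

Under the default RBAC bindings /healthz is normally readable without credentials, so an "ok" response would indicate the API server has come back.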
	
	
	==> dmesg <==
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	[Mar15 06:29] systemd-fstab-generator[6878]: Ignoring "noauto" option for root device
	[  +0.182963] systemd-fstab-generator[6890]: Ignoring "noauto" option for root device
	[  +0.207318] systemd-fstab-generator[6904]: Ignoring "noauto" option for root device
	[  +0.160914] systemd-fstab-generator[6916]: Ignoring "noauto" option for root device
	[  +0.264473] systemd-fstab-generator[6940]: Ignoring "noauto" option for root device
	[Mar15 06:31] systemd-fstab-generator[7068]: Ignoring "noauto" option for root device
	[  +0.099499] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.728671] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.581573] kauditd_printk_skb: 34 callbacks suppressed
	[Mar15 06:32] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"info","ts":"2024-03-15T06:27:58.935705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.93572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"warn","ts":"2024-03-15T06:27:59.520795Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"af74041eca695613","rtt":"9.002373ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:27:59.521075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"af74041eca695613","rtt":"1.272856ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"info","ts":"2024-03-15T06:28:00.636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.991925Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T06:28:00.992032Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-03-15T06:28:00.992202Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.992285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994802Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:28:00.994881Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:28:00.99506Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995086Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995114Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995281Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995331Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995423Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995436Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:01.013373Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.013771Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.01388Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> etcd [959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8] <==
	{"level":"info","ts":"2024-03-15T06:31:59.650477Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:31:59.664413Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"af74041eca695613","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T06:31:59.664504Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:31:59.6685Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"af74041eca695613","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T06:31:59.668598Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:32:00.258901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.260102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.260177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-15T06:32:00.260314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.260345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.260375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgVote request to af74041eca695613 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from af74041eca695613 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-15T06:32:00.266182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.273166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:ha-866665 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:32:00.273534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:32:00.275869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:32:00.275964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:32:00.277324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-03-15T06:32:00.277644Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:32:00.277708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 06:35:36 up 25 min,  0 users,  load average: 0.10, 0.19, 0.27
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54] <==
	I0315 06:31:18.825608       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:31:19.125113       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:19.440688       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:22.512472       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:24.514025       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:27.514923       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
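
kindnetd exhausts its retry budget against the same unreachable 10.96.0.1:443 and panics, which is why kindnet-9nvvx shows up as Exited with attempt 5 in the container status table. After the API server recovers, a sketch of verifying that the DaemonSet pod restarted cleanly (the app=kindnet label is assumed from the standard kindnet manifest, not read from this log) is:

  # check restart count and node placement of the kindnet pod
  kubectl --context ha-866665 -n kube-system get pods -l app=kindnet -o wide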
	
	
	==> kube-apiserver [e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596] <==
	W0315 06:27:50.003349       1 reflector.go:535] storage/cacher.go:/volumeattachments: failed to list *storage.VolumeAttachment: etcdserver: request timed out
	I0315 06:27:50.003365       1 trace.go:236] Trace[731904531]: "Reflector ListAndWatch" name:storage/cacher.go:/volumeattachments (15-Mar-2024 06:27:36.912) (total time: 13090ms):
	Trace[731904531]: ---"Objects listed" error:etcdserver: request timed out 13090ms (06:27:50.003)
	Trace[731904531]: [13.090756124s] [13.090756124s] END
	E0315 06:27:50.003369       1 cacher.go:470] cacher (volumeattachments.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.VolumeAttachment: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003380       1 reflector.go:535] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	I0315 06:27:50.003393       1 trace.go:236] Trace[592093521]: "Reflector ListAndWatch" name:storage/cacher.go:/prioritylevelconfigurations (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[592093521]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[592093521]: [13.09928422s] [13.09928422s] END
	E0315 06:27:50.003397       1 cacher.go:470] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003439       1 reflector.go:535] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	I0315 06:27:50.003456       1 trace.go:236] Trace[746771974]: "Reflector ListAndWatch" name:storage/cacher.go:/poddisruptionbudgets (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[746771974]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[746771974]: [13.099292505s] [13.099292505s] END
	E0315 06:27:50.003482       1 cacher.go:470] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003501       1 reflector.go:535] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out
	I0315 06:27:50.003534       1 trace.go:236] Trace[1529918640]: "Reflector ListAndWatch" name:storage/cacher.go:/flowschemas (15-Mar-2024 06:27:36.900) (total time: 13103ms):
	Trace[1529918640]: ---"Objects listed" error:etcdserver: request timed out 13103ms (06:27:50.003)
	Trace[1529918640]: [13.10350995s] [13.10350995s] END
	E0315 06:27:50.003539       1 cacher.go:470] cacher (flowschemas.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003551       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	I0315 06:27:50.003567       1 trace.go:236] Trace[1142995160]: "Reflector ListAndWatch" name:storage/cacher.go:/serviceaccounts (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[1142995160]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[1142995160]: [13.099504673s] [13.099504673s] END
	E0315 06:27:50.003590       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd] <==
	I0315 06:34:01.115770       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:34:02.015947       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:34:02.015993       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:34:02.025496       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:34:02.025650       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:34:02.026071       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:34:02.026580       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:34:12.027833       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
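
The controller-manager starts, brings up its secure serving endpoint on 127.0.0.1:10257, and then gives up about ten seconds later because the API server at 192.168.39.78:8443 never answers /healthz. Since kube-apiserver runs as a kubelet static pod in this setup, a reasonable next step (paths assumed from the usual kubeadm layout that minikube follows) is to confirm the manifest is present and see why kubelet is not keeping the pod up:

  # static pod manifests plus the tail of the kubelet journal
  minikube ssh -p ha-866665 "ls /etc/kubernetes/manifests; sudo journalctl -u kubelet --no-pager -n 50"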
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	E0315 06:26:01.167720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:01.167669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:01.167846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:04.241012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:04.241169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.456306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.456506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.457153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.457208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:16.532382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:16.532484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:34.959861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:34.959939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:14.897385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:14.897592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:17.967871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:17.968263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:21.039682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:21.039794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:54.832570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:54.832649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5] <==
	E0315 06:31:42.162745       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:03.665169       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:03.665837       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0315 06:32:03.704165       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:32:03.704332       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:32:03.707769       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:32:03.707920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:32:03.709020       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:32:03.709101       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:32:03.711683       1 config.go:188] "Starting service config controller"
	I0315 06:32:03.711754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:32:03.711789       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:32:03.711825       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:32:03.712712       1 config.go:315] "Starting node config controller"
	I0315 06:32:03.712754       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0315 06:32:06.736537       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0315 06:32:06.736941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:07.612784       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:32:08.012336       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:32:08.013310       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869] <==
	E0315 06:35:01.851300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.78:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:02.252869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:02.252929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:02.432962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:02.433064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:04.291643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:04.291746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:08.964028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:08.964097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:14.007535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:14.007679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:16.430156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:16.430457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:18.622486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:18.622559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:23.506725       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:23.506825       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:23.886417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:23.886503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:28.295205       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:28.295388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:31.864716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:31.864806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:35.412598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:35.412708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	W0315 06:27:33.721173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:27:33.721276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:27:34.581491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:34.581544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:36.411769       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:27:36.411881       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:27:36.473470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:27:36.473532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:27:37.175018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:27:37.175090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:27:38.621446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:27:38.621559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:27:39.985765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:39.985857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:41.948412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:41.948471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:58.053579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.053849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:58.945885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.945942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:28:00.884506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:28:00.884572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	I0315 06:28:00.994442       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:28:00.994493       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:28:00.994655       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 06:35:05 ha-866665 kubelet[1369]: E0315 06:35:05.567930    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:35:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:35:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:35:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:35:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: I0315 06:35:10.545742    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.546498    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.549941    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.549982    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.550024    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.550095    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:35:13 ha-866665 kubelet[1369]: I0315 06:35:13.544853    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:35:13 ha-866665 kubelet[1369]: E0315 06:35:13.545764    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:35:15 ha-866665 kubelet[1369]: I0315 06:35:15.544672    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:35:15 ha-866665 kubelet[1369]: E0315 06:35:15.545381    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: I0315 06:35:24.544088    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: E0315 06:35:24.544466    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: I0315 06:35:24.544560    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: E0315 06:35:24.544801    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.550967    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551026    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551048    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551114    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:35:29 ha-866665 kubelet[1369]: I0315 06:35:29.544575    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:35:29 ha-866665 kubelet[1369]: E0315 06:35:29.544964    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	

-- /stdout --
** stderr ** 
	E0315 06:35:35.814380   34780 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
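The "bufio.Scanner: token too long" error in the stderr block is the standard Go bufio error raised when a single line exceeds the scanner's default 64 KiB token limit, which is why the very long start log line could not be echoed back. As a minimal, illustrative sketch only (the file name and the 1 MiB cap are assumptions, not minikube's actual log reader), a scanner can be given a larger buffer before scanning:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative file name; the report reads .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the default 64 KiB token limit to 1 MiB so one very long line does not
		// abort the scan with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}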
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665: exit status 2 (229.382075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-866665" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (457.38s)
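The kube-controller-manager log above shows the immediate cause of the restart failure: it gave up building its controller context because "https://192.168.39.78:8443/healthz" refused connections, and the post-mortem status agrees that the apiserver is Stopped. The following is only a hedged sketch of reproducing that health probe by hand against the address taken from the log; the timeout and the InsecureSkipVerify transport are assumptions for a one-off manual check, not part of the test suite:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Address taken from the controller-manager error; everything else is illustrative.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver's serving certificate is not in the host trust store,
				// so skip verification for this manual probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.78:8443/healthz")
		if err != nil {
			// With the apiserver down this prints "connect: connection refused",
			// matching the log lines above.
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}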

x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.29s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-866665" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-866665\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-866665\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-866665\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.78\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.27\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.184\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":
false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMe
trics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
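The assertion only inspects the top-level "Status" field of each entry under "valid" in the 'profile list --output json' payload; the embedded cluster config dumped in the failure message is irrelevant to the Degraded-vs-Stopped comparison. Below is a minimal sketch of reading just those fields, assuming the JSON shape shown in the failure message (this is not the test's own helper):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Only the fields the assertion compares; the rest of the profile config is ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// Here the test expected "Degraded" for ha-866665 but the run reported "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}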
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665: exit status 2 (232.68542ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.417671079s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	| node    | ha-866665 node delete m03 -v=7                                                   | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC | 15 Mar 24 06:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-866665 stop -v=7                                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true                                                         | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:28 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:28:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:28:00.069231   33437 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:28:00.069368   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069377   33437 out.go:304] Setting ErrFile to fd 2...
	I0315 06:28:00.069382   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069568   33437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:28:00.070093   33437 out.go:298] Setting JSON to false
	I0315 06:28:00.070988   33437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4176,"bootTime":1710479904,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:28:00.071057   33437 start.go:139] virtualization: kvm guest
	I0315 06:28:00.074620   33437 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:28:00.076308   33437 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:28:00.076319   33437 notify.go:220] Checking for updates...
	I0315 06:28:00.079197   33437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:28:00.080588   33437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:28:00.081864   33437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:28:00.083324   33437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:28:00.084651   33437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:28:00.086650   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.087036   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.087091   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.102114   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0315 06:28:00.102558   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.103095   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.103124   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.103438   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.103601   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.103876   33437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:28:00.104159   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.104210   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.119133   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0315 06:28:00.119585   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.120070   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.120090   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.120437   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.120651   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.156291   33437 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:28:00.157886   33437 start.go:297] selected driver: kvm2
	I0315 06:28:00.157902   33437 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.158040   33437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:28:00.158357   33437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.158422   33437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:28:00.174458   33437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:28:00.175133   33437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:28:00.175191   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:28:00.175203   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:28:00.175251   33437 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.175362   33437 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.177468   33437 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:28:00.179008   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:28:00.179040   33437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:28:00.179047   33437 cache.go:56] Caching tarball of preloaded images
	I0315 06:28:00.179131   33437 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:28:00.179142   33437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:28:00.179294   33437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:28:00.179480   33437 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:28:00.179520   33437 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-866665"
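	Note: the acquireMachinesLock entry above records a lock acquired with Delay:500ms and Timeout:13m0s. A minimal Go sketch of that retry-until-timeout pattern, using a hypothetical lock-file path rather than minikube's actual lock implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries creating lockPath exclusively every `delay` until `timeout`.
    // The lock-file approach is a stand-in for illustration only.
    func acquireLock(lockPath string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", lockPath, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/ha-866665.lock", 500*time.Millisecond, 30*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer release()
        fmt.Println("lock held; machine operations would run here")
    }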
	I0315 06:28:00.179534   33437 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:28:00.179545   33437 fix.go:54] fixHost starting: 
	I0315 06:28:00.179780   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.179810   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.194943   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0315 06:28:00.195338   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.195810   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.195828   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.196117   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.196309   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.196495   33437 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:28:00.198137   33437 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:28:00.198153   33437 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:28:00.200161   33437 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:28:00.201473   33437 machine.go:94] provisionDockerMachine start ...
	I0315 06:28:00.201503   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.201694   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.204348   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204777   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.204797   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204937   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.205101   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205264   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205376   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.205519   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.205700   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.205711   33437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:28:00.305507   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
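	Note: the "Using SSH client type: native" entries record commands executed on the VM over SSH (here `hostname` against 192.168.39.78:22 as user docker). A minimal sketch of that step with golang.org/x/crypto/ssh, assuming a placeholder key path; this is not minikube's ssh_runner itself, and host-key checking is skipped only to keep the sketch short:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials addr, authenticates with the given private key, and returns
    // the combined output of cmd.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real use
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // IP and user mirror the log above; the key path is a placeholder.
        out, err := runSSH("192.168.39.78:22", "docker", "/path/to/id_rsa", "hostname")
        fmt.Println(out, err)
    }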
	
	I0315 06:28:00.305537   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305774   33437 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:28:00.305803   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305989   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.308802   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309169   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.309190   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309354   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.309553   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.309826   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.310014   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.310190   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.310366   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.310382   33437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:28:00.429403   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.429432   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.432235   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432606   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.432644   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432809   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.432999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433159   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433289   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.433507   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.433711   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.433736   33437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:28:00.533992   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:28:00.534024   33437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:28:00.534042   33437 buildroot.go:174] setting up certificates
	I0315 06:28:00.534050   33437 provision.go:84] configureAuth start
	I0315 06:28:00.534059   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.534324   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:28:00.536932   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537280   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.537309   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537403   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.539778   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540170   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.540188   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540352   33437 provision.go:143] copyHostCerts
	I0315 06:28:00.540374   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540409   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:28:00.540418   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540502   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:28:00.540577   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540595   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:28:00.540602   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540626   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:28:00.540689   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540712   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:28:00.540721   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540757   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:28:00.540858   33437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
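	Note: provision.go:117 above generates server.pem signed by the local CA with SANs [127.0.0.1 192.168.39.78 ha-866665 localhost minikube]. A self-contained crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA in place of minikube's ca.pem/ca-key.pem, so it illustrates the technique rather than the exact code path:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch; minikube reuses its existing CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs recorded in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-866665", Organization: []string{"jenkins.ha-866665"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-866665", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit the server certificate in PEM form (server.pem equivalent).
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }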
	I0315 06:28:00.727324   33437 provision.go:177] copyRemoteCerts
	I0315 06:28:00.727392   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:28:00.727415   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.730386   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.730795   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.730817   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.731033   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.731269   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.731448   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.731603   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:28:00.811679   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:28:00.811760   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:28:00.840244   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:28:00.840325   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:28:00.866687   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:28:00.866766   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:28:00.893745   33437 provision.go:87] duration metric: took 359.681699ms to configureAuth
	I0315 06:28:00.893783   33437 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:28:00.894043   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.894134   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.897023   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897388   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.897411   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897569   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.897752   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.897920   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.898052   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.898189   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.898433   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.898471   33437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:29:35.718292   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:29:35.718325   33437 machine.go:97] duration metric: took 1m35.516837024s to provisionDockerMachine
	I0315 06:29:35.718343   33437 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:29:35.718359   33437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:29:35.718374   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.718720   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:29:35.718757   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.722200   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722789   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.722838   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722915   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.723113   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.723278   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.723452   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:35.808948   33437 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:29:35.813922   33437 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:29:35.813958   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:29:35.814035   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:29:35.814150   33437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:29:35.814165   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:29:35.814262   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:29:35.825162   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:29:35.853607   33437 start.go:296] duration metric: took 135.248885ms for postStartSetup
	I0315 06:29:35.853656   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.853968   33437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:29:35.853999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.857046   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857515   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.857538   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857740   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.857904   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.858174   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.858327   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:29:35.939552   33437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:29:35.939581   33437 fix.go:56] duration metric: took 1m35.76003955s for fixHost
	I0315 06:29:35.939603   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.942284   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942621   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.942656   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942842   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.943040   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943209   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943341   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.943527   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:29:35.943686   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:29:35.943696   33437 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:29:36.045713   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710484176.008370872
	
	I0315 06:29:36.045741   33437 fix.go:216] guest clock: 1710484176.008370872
	I0315 06:29:36.045749   33437 fix.go:229] Guest: 2024-03-15 06:29:36.008370872 +0000 UTC Remote: 2024-03-15 06:29:35.939588087 +0000 UTC m=+95.917046644 (delta=68.782785ms)
	I0315 06:29:36.045784   33437 fix.go:200] guest clock delta is within tolerance: 68.782785ms
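	Note: fix.go above compares the guest's `date +%s.%N` output against the local clock and accepts the 68.78ms delta as within tolerance. A sketch of that parse-and-compare step, assuming the same output format; the 2-second threshold below is an assumption, not the value minikube applies:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses a "seconds.nanoseconds" string (the output of
    // `date +%s.%N`) and returns how far the guest clock lags or leads local time.
    func clockDelta(guest string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fractional part to 9 digits so it parses as nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Since(time.Unix(sec, nsec)), nil
    }

    func main() {
        d, err := clockDelta("1710484176.008370872") // value taken from the log above
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("guest clock delta: %v (within assumed 2s tolerance: %v)\n",
            d, math.Abs(d.Seconds()) < 2)
    }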
	I0315 06:29:36.045790   33437 start.go:83] releasing machines lock for "ha-866665", held for 1m35.866260772s
	I0315 06:29:36.045808   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.046095   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:29:36.048748   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049090   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.049125   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049284   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.049937   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050138   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050191   33437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:29:36.050244   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.050342   33437 ssh_runner.go:195] Run: cat /version.json
	I0315 06:29:36.050361   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.053057   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053439   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053473   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053529   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053632   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.053795   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.053957   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.053958   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053983   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.054117   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.054145   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.054330   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.054470   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.054647   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.174816   33437 ssh_runner.go:195] Run: systemctl --version
	I0315 06:29:36.181715   33437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:29:36.358096   33437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:29:36.367581   33437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:29:36.367659   33437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:29:36.383454   33437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:29:36.383480   33437 start.go:494] detecting cgroup driver to use...
	I0315 06:29:36.383550   33437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:29:36.407514   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:29:36.425757   33437 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:29:36.425807   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:29:36.448161   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:29:36.466873   33437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:29:36.634934   33437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:29:36.809139   33437 docker.go:233] disabling docker service ...
	I0315 06:29:36.809211   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:29:36.831715   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:29:36.847966   33437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:29:37.006211   33437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:29:37.162186   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:29:37.178537   33437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:29:37.200300   33437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:29:37.200368   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.212398   33437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:29:37.212455   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.223908   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.235824   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.247520   33437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:29:37.259008   33437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:29:37.269062   33437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:29:37.281152   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:29:37.434941   33437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:31:11.689384   33437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.254391387s)
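	Note: the sed commands above set pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before the (slow, ~1m34s here) `systemctl restart crio`. A local Go sketch of the same `key = "value"` line rewrite; the real flow runs sed over SSH on the guest, so this is an illustration of the edit, not of the transport:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfLine replaces any existing `key = ...` line in path with
    // `key = "value"`, mirroring the sed invocations in the log.
    func setConfLine(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Path and values match the log; running this is only meaningful on a
        // machine that actually has this CRI-O drop-in file.
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        if err := setConfLine(conf, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }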
	I0315 06:31:11.689430   33437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:31:11.689496   33437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:31:11.698102   33437 start.go:562] Will wait 60s for crictl version
	I0315 06:31:11.698154   33437 ssh_runner.go:195] Run: which crictl
	I0315 06:31:11.702605   33437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:31:11.746302   33437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:31:11.746373   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.777004   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.813410   33437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:31:11.815123   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:31:11.818257   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818696   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:31:11.818717   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818982   33437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:31:11.824253   33437 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:31:11.824379   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:31:11.824419   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.877400   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.877423   33437 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:31:11.877466   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.913358   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.913383   33437 cache_images.go:84] Images are preloaded, skipping loading
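	Note: the preload check above runs `sudo crictl images --output json` and concludes that all required images are already present. A sketch of decoding that output to look for one tag; the JSON field names ("images", "repoTags") are assumptions about crictl's output shape, not taken from this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList models only the parts of the JSON this sketch needs.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether tag appears among the runtime's image repo tags.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/pause:3.9")
        fmt.Println(ok, err)
    }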
	I0315 06:31:11.913393   33437 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:31:11.913524   33437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:31:11.913604   33437 ssh_runner.go:195] Run: crio config
	I0315 06:31:11.961648   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:31:11.961666   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:31:11.961674   33437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:31:11.961692   33437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:31:11.961854   33437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 06:31:11.961877   33437 kube-vip.go:111] generating kube-vip config ...
	I0315 06:31:11.961925   33437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:31:11.974783   33437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:31:11.974879   33437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
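	Note: kube-vip.go:111/133 above render the kube-vip static-pod manifest with the cluster VIP 192.168.39.254 and control-plane load balancing enabled, then scp it to /etc/kubernetes/manifests/kube-vip.yaml. A minimal text/template sketch of that substitution; the fragment and struct below are illustrative stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams holds the values substituted into the manifest fragment below.
    type vipParams struct {
        VIP      string
        Port     string
        LBEnable bool
        Image    string
    }

    const fragment = `    - name: address
          value: "{{ .VIP }}"
        - name: lb_enable
          value: "{{ .LBEnable }}"
        - name: lb_port
          value: "{{ .Port }}"
        image: {{ .Image }}
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(fragment))
        // Values mirror the generated manifest in the log above.
        _ = t.Execute(os.Stdout, vipParams{
            VIP:      "192.168.39.254",
            Port:     "8443",
            LBEnable: true,
            Image:    "ghcr.io/kube-vip/kube-vip:v0.7.1",
        })
    }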
	I0315 06:31:11.974928   33437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:31:11.985708   33437 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:31:11.985779   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:31:11.996849   33437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:31:12.015133   33437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:31:12.032728   33437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:31:12.050473   33437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:31:12.071279   33437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:31:12.075748   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:31:12.239653   33437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:31:12.288563   33437 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:31:12.288592   33437 certs.go:194] generating shared ca certs ...
	I0315 06:31:12.288612   33437 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.288830   33437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:31:12.288895   33437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:31:12.288911   33437 certs.go:256] generating profile certs ...
	I0315 06:31:12.289016   33437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:31:12.289054   33437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211
	I0315 06:31:12.289075   33437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:31:12.459406   33437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 ...
	I0315 06:31:12.459436   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211: {Name:mkce2140c17c76a43eac310ec6de314aee20f623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459623   33437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 ...
	I0315 06:31:12.459635   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211: {Name:mk03c3e33d6e2b84dc52dfa74e4afefa164f8f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459705   33437 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:31:12.459841   33437 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
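The apiserver serving certificate generated above carries the SAN list logged at 06:31:12.289075 (the in-cluster service IPs, localhost, both control-plane node IPs, and the VIP 192.168.39.254) and is then promoted from its hashed filename to apiserver.crt. A sketch of how to confirm which SANs actually landed in the certificate, using the profile path from this log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt \
      | grep -A1 'Subject Alternative Name'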
	I0315 06:31:12.459958   33437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:31:12.459972   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:31:12.459985   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:31:12.459995   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:31:12.460005   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:31:12.460016   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:31:12.460025   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:31:12.460039   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:31:12.460056   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:31:12.460105   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:31:12.460142   33437 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:31:12.460148   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:31:12.460168   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:31:12.460186   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:31:12.460202   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:31:12.460236   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:31:12.460264   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:12.460276   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:31:12.460287   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:31:12.460824   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:31:12.560429   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:31:12.636236   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:31:12.785307   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:31:12.994552   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 06:31:13.094315   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:31:13.148757   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:31:13.186255   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:31:13.219468   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:31:13.298432   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:31:13.356395   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:31:13.388275   33437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:31:13.416436   33437 ssh_runner.go:195] Run: openssl version
	I0315 06:31:13.425310   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:31:13.441059   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446108   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446176   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.452449   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:31:13.463777   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:31:13.475540   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480658   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480725   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.490764   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:31:13.507483   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:31:13.522259   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527387   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527450   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.535340   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
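Each openssl x509 -hash / ln -fs pair above implements OpenSSL's hashed-directory CA lookup: the link name is the certificate's subject hash plus a .0 suffix, which is how b5213941.0 above ends up pointing at minikubeCA.pem. The same step by hand, with the paths from this log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # lets libssl find the CA by subject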
	I0315 06:31:13.547679   33437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:31:13.555390   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:31:13.561596   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:31:13.569097   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:31:13.575180   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:31:13.581211   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:31:13.589108   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
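The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means the certificate will not expire inside that window. Checking a single certificate by hand, with one of the paths from the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires (or has expired) within 24h"
    fi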
	I0315 06:31:13.597196   33437 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:31:13.597364   33437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:31:13.597432   33437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:31:13.654360   33437 cri.go:89] found id: "a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703"
	I0315 06:31:13.654388   33437 cri.go:89] found id: "f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a"
	I0315 06:31:13.654393   33437 cri.go:89] found id: "4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2"
	I0315 06:31:13.654401   33437 cri.go:89] found id: "53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869"
	I0315 06:31:13.654406   33437 cri.go:89] found id: "bdde9d3309aa653bdde9bc5fb009352128cc082c6210723aabf3090316773af4"
	I0315 06:31:13.654411   33437 cri.go:89] found id: "e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a"
	I0315 06:31:13.654414   33437 cri.go:89] found id: "e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	I0315 06:31:13.654418   33437 cri.go:89] found id: "cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7"
	I0315 06:31:13.654422   33437 cri.go:89] found id: "5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	I0315 06:31:13.654429   33437 cri.go:89] found id: "a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe"
	I0315 06:31:13.654434   33437 cri.go:89] found id: "e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306"
	I0315 06:31:13.654438   33437 cri.go:89] found id: "c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e"
	I0315 06:31:13.654447   33437 cri.go:89] found id: "950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088"
	I0315 06:31:13.654451   33437 cri.go:89] found id: "a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2"
	I0315 06:31:13.654473   33437 cri.go:89] found id: "20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a"
	I0315 06:31:13.654481   33437 cri.go:89] found id: "002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c"
	I0315 06:31:13.654485   33437 cri.go:89] found id: "f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c"
	I0315 06:31:13.654494   33437 cri.go:89] found id: "8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de"
	I0315 06:31:13.654501   33437 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:31:13.654506   33437 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:31:13.654513   33437 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:31:13.654517   33437 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:31:13.654524   33437 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:31:13.654528   33437 cri.go:89] found id: ""
	I0315 06:31:13.654578   33437 ssh_runner.go:195] Run: sudo runc list -f json
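The crictl query above collects the ID of every kube-system container CRI-O knows about, running or exited (the found-id list that precedes it), and the runc list that closes the sequence cross-checks the low-level OCI runtime's view. A sketch of taking the same inventory by hand on the node, reusing the flags from the log alongside crictl's default table output:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system          # names, states and restart attempts
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # bare IDs, as collected here
    sudo runc list -f json                                                     # the OCI runtime's own view of the containers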
	
	
	==> CRI-O <==
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.355766781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484538355736051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac074489-4fbd-4871-86cd-01f436fed1d1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.356716179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=575b06ed-b6de-4fed-9687-d5dca5840e95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.356771050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=575b06ed-b6de-4fed-9687-d5dca5840e95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.357153136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=575b06ed-b6de-4fed-9687-d5dca5840e95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.411099815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2bc95374-3be8-4924-989a-6555a0bcb8e0 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.411174974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2bc95374-3be8-4924-989a-6555a0bcb8e0 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.412979452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19338065-ce3d-4668-9d79-6be33d5ab5bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.413514898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484538413491178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19338065-ce3d-4668-9d79-6be33d5ab5bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.414057488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e64d9bd-54aa-43d1-92d1-d1f470bec160 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.414113850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e64d9bd-54aa-43d1-92d1-d1f470bec160 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.414583093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e64d9bd-54aa-43d1-92d1-d1f470bec160 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.461373132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c244a1-cf73-4cde-8d56-58ec58c96cf3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.461468490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c244a1-cf73-4cde-8d56-58ec58c96cf3 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.463540051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81d2dea2-c958-4e3e-b2aa-f64e7493d617 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.464714059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484538464682749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81d2dea2-c958-4e3e-b2aa-f64e7493d617 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.465382057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2df5f215-e10d-48ea-a75d-b294ca76eaaa name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.465444038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2df5f215-e10d-48ea-a75d-b294ca76eaaa name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.465810222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2df5f215-e10d-48ea-a75d-b294ca76eaaa name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.506529759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0d7a535-1450-48ae-b394-e5f7b0e9fe8e name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.506624792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0d7a535-1450-48ae-b394-e5f7b0e9fe8e name=/runtime.v1.RuntimeService/Version
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.509705271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1075dcb-bcdb-4e14-a16d-9c32e16ac3d6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.510159418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484538510134422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1075dcb-bcdb-4e14-a16d-9c32e16ac3d6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.510691536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af8843bd-7ae0-4611-b068-a7693ca1983a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.510782732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af8843bd-7ae0-4611-b068-a7693ca1983a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:35:38 ha-866665 crio[6969]: time="2024-03-15 06:35:38.511171935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\
"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e
5d443d53332f70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[
string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 7b2cc69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\"
,\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
2b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,
},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af8843bd-7ae0-4611-b068-a7693ca1983a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f4c9a0644c94       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   About a minute ago   Exited              kube-controller-manager   5                   16c817ba7d264       kube-controller-manager-ha-866665
	8568bece5a49f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   3 minutes ago        Running             busybox                   2                   1a1fdba5ec224       busybox-5b5d89c9d6-82knb
	10518fb395cce       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   4 minutes ago        Exited              kindnet-cni               5                   d7a69a3a337af       kindnet-9nvvx
	fd937a8a91dec       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   4 minutes ago        Running             kube-proxy                2                   9f2fb2d671096       kube-proxy-sbxgg
	f31b9e9704e22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 minutes ago        Exited              storage-provisioner       6                   a0a76006f9e8a       storage-provisioner
	959c3adf756ac       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   4 minutes ago        Running             etcd                      2                   22072600a839b       etcd-ha-866665
	a3b8244e29a11       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 minutes ago        Running             coredns                   2                   cf57c2ff9f3b2       coredns-5dd5756b68-r57px
	f8f276ed61ae4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 minutes ago        Running             coredns                   2                   14af170de4b57       coredns-5dd5756b68-mgthb
	4632db4347aeb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   4 minutes ago        Running             kube-vip                  5                   09c39329a8da2       kube-vip-ha-866665
	53858589abe09       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   4 minutes ago        Running             kube-scheduler            2                   7f2d26260bc13       kube-scheduler-ha-866665
	e4370ef8479c8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   8 minutes ago        Exited              kube-apiserver            4                   f7b655acbd708       kube-apiserver-ha-866665
	cb4635f3b41c2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   9 minutes ago        Exited              kube-vip                  4                   b3fef0e73d7bb       kube-vip-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   12 minutes ago       Exited              busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago       Exited              coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago       Exited              kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago       Exited              etcd                      1                   79337bac30908       etcd-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago       Exited              kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago       Exited              coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	Trace[800224354]: ---"Objects listed" error:Unauthorized 12252ms (06:27:21.915)
	Trace[800224354]: [12.252932189s] [12.252932189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[532336764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:10.595) (total time: 11322ms):
	Trace[532336764]: ---"Objects listed" error:Unauthorized 11321ms (06:27:21.916)
	Trace[532336764]: [11.322096854s] [11.322096854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1149679676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:25.734) (total time: 10173ms):
	Trace[1149679676]: ---"Objects listed" error:Unauthorized 10171ms (06:27:35.906)
	Trace[1149679676]: [10.173827374s] [10.173827374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40115 - 43756 "HINFO IN 2717951387138798829.7180821467390164679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010608616s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[652911396]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.712) (total time: 11205ms):
	Trace[652911396]: ---"Objects listed" error:Unauthorized 11205ms (06:27:35.918)
	Trace[652911396]: [11.205453768s] [11.205453768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[659281961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.824) (total time: 11093ms):
	Trace[659281961]: ---"Objects listed" error:Unauthorized 11093ms (06:27:35.918)
	Trace[659281961]: [11.093964434s] [11.093964434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47355 - 18813 "HINFO IN 6697230244971142980.3977120107732033871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009991661s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	[Mar15 06:29] systemd-fstab-generator[6878]: Ignoring "noauto" option for root device
	[  +0.182963] systemd-fstab-generator[6890]: Ignoring "noauto" option for root device
	[  +0.207318] systemd-fstab-generator[6904]: Ignoring "noauto" option for root device
	[  +0.160914] systemd-fstab-generator[6916]: Ignoring "noauto" option for root device
	[  +0.264473] systemd-fstab-generator[6940]: Ignoring "noauto" option for root device
	[Mar15 06:31] systemd-fstab-generator[7068]: Ignoring "noauto" option for root device
	[  +0.099499] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.728671] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.581573] kauditd_printk_skb: 34 callbacks suppressed
	[Mar15 06:32] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"info","ts":"2024-03-15T06:27:58.935705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.93572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"warn","ts":"2024-03-15T06:27:59.520795Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"af74041eca695613","rtt":"9.002373ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:27:59.521075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"af74041eca695613","rtt":"1.272856ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"info","ts":"2024-03-15T06:28:00.636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.991925Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T06:28:00.992032Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-03-15T06:28:00.992202Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.992285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994802Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:28:00.994881Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:28:00.99506Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995086Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995114Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995281Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995331Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995423Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995436Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:01.013373Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.013771Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.01388Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> etcd [959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8] <==
	{"level":"info","ts":"2024-03-15T06:31:59.650477Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:31:59.664413Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"af74041eca695613","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T06:31:59.664504Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:31:59.6685Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"af74041eca695613","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T06:31:59.668598Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:32:00.258901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.258974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.260102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:32:00.260177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-15T06:32:00.260314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.260345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.260375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgVote request to af74041eca695613 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from af74041eca695613 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-03-15T06:32:00.266182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.266194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 5"}
	{"level":"info","ts":"2024-03-15T06:32:00.273166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:ha-866665 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:32:00.273534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:32:00.275869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:32:00.275964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:32:00.277324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-03-15T06:32:00.277644Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:32:00.277708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 06:35:39 up 25 min,  0 users,  load average: 0.10, 0.19, 0.27
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54] <==
	I0315 06:31:18.825608       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:31:19.125113       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:19.440688       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:22.512472       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:24.514025       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:27.514923       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596] <==
	W0315 06:27:50.003349       1 reflector.go:535] storage/cacher.go:/volumeattachments: failed to list *storage.VolumeAttachment: etcdserver: request timed out
	I0315 06:27:50.003365       1 trace.go:236] Trace[731904531]: "Reflector ListAndWatch" name:storage/cacher.go:/volumeattachments (15-Mar-2024 06:27:36.912) (total time: 13090ms):
	Trace[731904531]: ---"Objects listed" error:etcdserver: request timed out 13090ms (06:27:50.003)
	Trace[731904531]: [13.090756124s] [13.090756124s] END
	E0315 06:27:50.003369       1 cacher.go:470] cacher (volumeattachments.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.VolumeAttachment: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003380       1 reflector.go:535] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	I0315 06:27:50.003393       1 trace.go:236] Trace[592093521]: "Reflector ListAndWatch" name:storage/cacher.go:/prioritylevelconfigurations (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[592093521]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[592093521]: [13.09928422s] [13.09928422s] END
	E0315 06:27:50.003397       1 cacher.go:470] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003439       1 reflector.go:535] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	I0315 06:27:50.003456       1 trace.go:236] Trace[746771974]: "Reflector ListAndWatch" name:storage/cacher.go:/poddisruptionbudgets (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[746771974]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[746771974]: [13.099292505s] [13.099292505s] END
	E0315 06:27:50.003482       1 cacher.go:470] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003501       1 reflector.go:535] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out
	I0315 06:27:50.003534       1 trace.go:236] Trace[1529918640]: "Reflector ListAndWatch" name:storage/cacher.go:/flowschemas (15-Mar-2024 06:27:36.900) (total time: 13103ms):
	Trace[1529918640]: ---"Objects listed" error:etcdserver: request timed out 13103ms (06:27:50.003)
	Trace[1529918640]: [13.10350995s] [13.10350995s] END
	E0315 06:27:50.003539       1 cacher.go:470] cacher (flowschemas.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003551       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	I0315 06:27:50.003567       1 trace.go:236] Trace[1142995160]: "Reflector ListAndWatch" name:storage/cacher.go:/serviceaccounts (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[1142995160]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[1142995160]: [13.099504673s] [13.099504673s] END
	E0315 06:27:50.003590       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd] <==
	I0315 06:34:01.115770       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:34:02.015947       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:34:02.015993       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:34:02.025496       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:34:02.025650       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:34:02.026071       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:34:02.026580       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:34:12.027833       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	E0315 06:26:01.167720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:01.167669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:01.167846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:04.241012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:04.241169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.456306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.456506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.457153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.457208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:16.532382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:16.532484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:34.959861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:34.959939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:14.897385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:14.897592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:17.967871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:17.968263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:21.039682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:21.039794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:54.832570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:54.832649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5] <==
	E0315 06:31:42.162745       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:03.665169       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:03.665837       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0315 06:32:03.704165       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:32:03.704332       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:32:03.707769       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:32:03.707920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:32:03.709020       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:32:03.709101       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:32:03.711683       1 config.go:188] "Starting service config controller"
	I0315 06:32:03.711754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:32:03.711789       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:32:03.711825       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:32:03.712712       1 config.go:315] "Starting node config controller"
	I0315 06:32:03.712754       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0315 06:32:06.736537       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0315 06:32:06.736941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:07.612784       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:32:08.012336       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:32:08.013310       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869] <==
	E0315 06:35:02.433064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:04.291643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:04.291746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:08.964028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:08.964097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:14.007535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:14.007679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:16.430156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:16.430457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:18.622486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:18.622559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.78:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:23.506725       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:23.506825       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:23.886417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:23.886503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:28.295205       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:28.295388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:31.864716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:31.864806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:35.412598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:35.412708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:37.486053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.78:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:37.486164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.78:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:35:38.745098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:35:38.745152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	W0315 06:27:33.721173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:27:33.721276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:27:34.581491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:34.581544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:36.411769       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:27:36.411881       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:27:36.473470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:27:36.473532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:27:37.175018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:27:37.175090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:27:38.621446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:27:38.621559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:27:39.985765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:39.985857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:41.948412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:41.948471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:58.053579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.053849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:58.945885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.945942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:28:00.884506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:28:00.884572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	I0315 06:28:00.994442       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:28:00.994493       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:28:00.994655       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 06:35:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: I0315 06:35:10.545742    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.546498    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.549941    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.549982    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.550024    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:10 ha-866665 kubelet[1369]: E0315 06:35:10.550095    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:35:13 ha-866665 kubelet[1369]: I0315 06:35:13.544853    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:35:13 ha-866665 kubelet[1369]: E0315 06:35:13.545764    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:35:15 ha-866665 kubelet[1369]: I0315 06:35:15.544672    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:35:15 ha-866665 kubelet[1369]: E0315 06:35:15.545381    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: I0315 06:35:24.544088    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: E0315 06:35:24.544466    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: I0315 06:35:24.544560    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:35:24 ha-866665 kubelet[1369]: E0315 06:35:24.544801    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.550967    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551026    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551048    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:35:25 ha-866665 kubelet[1369]: E0315 06:35:25.551114    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:35:29 ha-866665 kubelet[1369]: I0315 06:35:29.544575    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:35:29 ha-866665 kubelet[1369]: E0315 06:35:29.544964    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	Mar 15 06:35:38 ha-866665 kubelet[1369]: I0315 06:35:38.544547    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:35:38 ha-866665 kubelet[1369]: E0315 06:35:38.544767    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:35:38 ha-866665 kubelet[1369]: I0315 06:35:38.544900    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:35:38 ha-866665 kubelet[1369]: E0315 06:35:38.545135    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	

                                                
                                                
-- /stdout --
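The `back-off 2m40s` and `back-off 5m0s` values in the kubelet journal above come from kubelet's CrashLoopBackOff, which doubles the restart delay after each consecutive failure (10s, 20s, 40s, ...) and caps it at five minutes. Below is a minimal sketch of that doubling-with-cap pattern; the 10s base and 5m cap mirror kubelet's well-known defaults, but the function itself is illustrative and is not kubelet code.

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after n consecutive failures,
// doubling from 10s and capping at 5m. Illustrative only; not kubelet's
// actual implementation.
func crashLoopDelay(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d: back-off %s\n", n, crashLoopDelay(n))
	}
	// failure 5 prints 2m40s and failures 6+ print 5m0s,
	// matching the back-off values in the journal above.
}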
** stderr ** 
	E0315 06:35:38.073367   34919 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
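The "bufio.Scanner: token too long" error in the stderr capture above is Go's bufio.ErrTooLong: by default a Scanner refuses tokens larger than bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains a longer line. A minimal sketch of reading such a file with an enlarged buffer follows; the relative file path and the 1 MiB cap are illustrative assumptions, not minikube's actual fix.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Open the log file; the path is an illustrative assumption.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is bufio.MaxScanTokenSize (64 KiB);
	// allow lines up to 1 MiB so very long log lines do not trip ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, this is where "bufio.Scanner: token too long" surfaces.
		fmt.Fprintln(os.Stderr, err)
	}
}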
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665: exit status 2 (239.187128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-866665" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.29s)
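The `status --format={{.APIServer}}` call above renders a Go text/template over minikube's status struct, which is why the captured stdout is just "Stopped". The sketch below shows that rendering pattern in isolation; only the APIServer field name is taken from the command above, and the Status type with its Host/Kubelet fields is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Status stands in for minikube's status struct; APIServer is the field
// referenced by the --format flag above, the other fields are assumptions.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// The --format flag value is parsed and rendered as a Go text/template.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println() // prints "Stopped", matching the stdout captured above
}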

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (58.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-866665 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-866665 --control-plane -v=7 --alsologtostderr: exit status 80 (56.108458224s)

                                                
                                                
-- stdout --
	* Adding node m05 to cluster ha-866665 as [worker control-plane]
	* Starting "ha-866665-m05" control-plane node in "ha-866665" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:35:39.741118   34974 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:35:39.741361   34974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:35:39.741369   34974 out.go:304] Setting ErrFile to fd 2...
	I0315 06:35:39.741373   34974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:35:39.741565   34974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:35:39.741810   34974 mustload.go:65] Loading cluster: ha-866665
	I0315 06:35:39.742185   34974 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:35:39.742586   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:35:39.742634   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:35:39.757421   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0315 06:35:39.757912   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:35:39.758477   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:35:39.758508   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:35:39.758860   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:35:39.759056   34974 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:35:39.760844   34974 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:35:39.761213   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:35:39.761260   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:35:39.775643   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0315 06:35:39.776026   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:35:39.776451   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:35:39.776493   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:35:39.776782   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:35:39.776985   34974 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:35:39.777462   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:35:39.777502   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:35:39.791529   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43805
	I0315 06:35:39.791912   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:35:39.792367   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:35:39.792387   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:35:39.792742   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:35:39.792932   34974 main.go:141] libmachine: (ha-866665-m02) Calling .GetState
	I0315 06:35:39.794486   34974 host.go:66] Checking if "ha-866665-m02" exists ...
	I0315 06:35:39.794799   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:35:39.794852   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:35:39.809410   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37615
	I0315 06:35:39.809882   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:35:39.810471   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:35:39.810513   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:35:39.810797   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:35:39.811007   34974 main.go:141] libmachine: (ha-866665-m02) Calling .DriverName
	I0315 06:35:39.811148   34974 api_server.go:166] Checking apiserver status ...
	I0315 06:35:39.811204   34974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:35:39.811247   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:35:39.813900   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:35:39.814242   34974 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:35:39.814275   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:35:39.814386   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:35:39.814554   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:35:39.814706   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:35:39.814843   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:35:39.898847   34974 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W0315 06:35:39.899164   34974 out.go:239] ! The control-plane node ha-866665 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-866665 apiserver is not running (will try others): (state=Stopped)
	I0315 06:35:39.899183   34974 api_server.go:166] Checking apiserver status ...
	I0315 06:35:39.899227   34974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:35:39.899257   34974 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHHostname
	I0315 06:35:39.902225   34974 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:35:39.902682   34974 main.go:141] libmachine: (ha-866665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:e0:d5", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:31:25 +0000 UTC Type:0 Mac:52:54:00:fa:e0:d5 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-866665-m02 Clientid:01:52:54:00:fa:e0:d5}
	I0315 06:35:39.902725   34974 main.go:141] libmachine: (ha-866665-m02) DBG | domain ha-866665-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:fa:e0:d5 in network mk-ha-866665
	I0315 06:35:39.902900   34974 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHPort
	I0315 06:35:39.903108   34974 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHKeyPath
	I0315 06:35:39.903296   34974 main.go:141] libmachine: (ha-866665-m02) Calling .GetSSHUsername
	I0315 06:35:39.903458   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m02/id_rsa Username:docker}
	I0315 06:35:39.992256   34974 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0315 06:35:40.007400   34974 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:35:40.007458   34974 ssh_runner.go:195] Run: ls
	I0315 06:35:40.012900   34974 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8443/healthz ...
	I0315 06:35:40.018957   34974 api_server.go:279] https://192.168.39.27:8443/healthz returned 200:
	ok
	I0315 06:35:40.021190   34974 out.go:177] * Adding node m05 to cluster ha-866665 as [worker control-plane]
	I0315 06:35:40.023124   34974 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:35:40.023241   34974 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:35:40.025050   34974 out.go:177] * Starting "ha-866665-m05" control-plane node in "ha-866665" cluster
	I0315 06:35:40.026348   34974 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:35:40.026383   34974 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:35:40.026396   34974 cache.go:56] Caching tarball of preloaded images
	I0315 06:35:40.026501   34974 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:35:40.026515   34974 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:35:40.026604   34974 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:35:40.026769   34974 start.go:360] acquireMachinesLock for ha-866665-m05: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:35:40.026810   34974 start.go:364] duration metric: took 21.45µs to acquireMachinesLock for "ha-866665-m05"
	I0315 06:35:40.026826   34974 start.go:93] Provisioning new machine with config: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:true Worker:true}
	I0315 06:35:40.026968   34974 start.go:125] createHost starting for "m05" (driver="kvm2")
	I0315 06:35:40.028713   34974 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 06:35:40.028842   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:35:40.028873   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:35:40.043581   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0315 06:35:40.043988   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:35:40.044439   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:35:40.044472   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:35:40.044750   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:35:40.044923   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetMachineName
	I0315 06:35:40.045061   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:35:40.045211   34974 start.go:159] libmachine.API.Create for "ha-866665" (driver="kvm2")
	I0315 06:35:40.045235   34974 client.go:168] LocalClient.Create starting
	I0315 06:35:40.045262   34974 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 06:35:40.045291   34974 main.go:141] libmachine: Decoding PEM data...
	I0315 06:35:40.045316   34974 main.go:141] libmachine: Parsing certificate...
	I0315 06:35:40.045362   34974 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 06:35:40.045380   34974 main.go:141] libmachine: Decoding PEM data...
	I0315 06:35:40.045391   34974 main.go:141] libmachine: Parsing certificate...
	I0315 06:35:40.045409   34974 main.go:141] libmachine: Running pre-create checks...
	I0315 06:35:40.045417   34974 main.go:141] libmachine: (ha-866665-m05) Calling .PreCreateCheck
	I0315 06:35:40.045602   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetConfigRaw
	I0315 06:35:40.045983   34974 main.go:141] libmachine: Creating machine...
	I0315 06:35:40.046003   34974 main.go:141] libmachine: (ha-866665-m05) Calling .Create
	I0315 06:35:40.046140   34974 main.go:141] libmachine: (ha-866665-m05) Creating KVM machine...
	I0315 06:35:40.047399   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found existing default KVM network
	I0315 06:35:40.047569   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found existing private KVM network mk-ha-866665
	I0315 06:35:40.047718   34974 main.go:141] libmachine: (ha-866665-m05) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05 ...
	I0315 06:35:40.047739   34974 main.go:141] libmachine: (ha-866665-m05) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 06:35:40.047807   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:40.047694   35010 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:35:40.047923   34974 main.go:141] libmachine: (ha-866665-m05) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 06:35:40.263259   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:40.263142   35010 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa...
	I0315 06:35:40.415384   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:40.415245   35010 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/ha-866665-m05.rawdisk...
	I0315 06:35:40.415432   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Writing magic tar header
	I0315 06:35:40.415450   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Writing SSH key tar header
	I0315 06:35:40.415463   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:40.415412   35010 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05 ...
	I0315 06:35:40.415575   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05
	I0315 06:35:40.415618   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 06:35:40.415647   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05 (perms=drwx------)
	I0315 06:35:40.415677   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 06:35:40.415692   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:35:40.415702   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 06:35:40.415716   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 06:35:40.415729   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 06:35:40.415744   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home/jenkins
	I0315 06:35:40.415756   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Checking permissions on dir: /home
	I0315 06:35:40.415772   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Skipping /home - not owner
	I0315 06:35:40.415787   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 06:35:40.415800   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 06:35:40.415827   34974 main.go:141] libmachine: (ha-866665-m05) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 06:35:40.415847   34974 main.go:141] libmachine: (ha-866665-m05) Creating domain...
	I0315 06:35:40.416938   34974 main.go:141] libmachine: (ha-866665-m05) define libvirt domain using xml: 
	I0315 06:35:40.416960   34974 main.go:141] libmachine: (ha-866665-m05) <domain type='kvm'>
	I0315 06:35:40.416972   34974 main.go:141] libmachine: (ha-866665-m05)   <name>ha-866665-m05</name>
	I0315 06:35:40.416983   34974 main.go:141] libmachine: (ha-866665-m05)   <memory unit='MiB'>2200</memory>
	I0315 06:35:40.416995   34974 main.go:141] libmachine: (ha-866665-m05)   <vcpu>2</vcpu>
	I0315 06:35:40.417005   34974 main.go:141] libmachine: (ha-866665-m05)   <features>
	I0315 06:35:40.417014   34974 main.go:141] libmachine: (ha-866665-m05)     <acpi/>
	I0315 06:35:40.417024   34974 main.go:141] libmachine: (ha-866665-m05)     <apic/>
	I0315 06:35:40.417033   34974 main.go:141] libmachine: (ha-866665-m05)     <pae/>
	I0315 06:35:40.417042   34974 main.go:141] libmachine: (ha-866665-m05)     
	I0315 06:35:40.417051   34974 main.go:141] libmachine: (ha-866665-m05)   </features>
	I0315 06:35:40.417062   34974 main.go:141] libmachine: (ha-866665-m05)   <cpu mode='host-passthrough'>
	I0315 06:35:40.417070   34974 main.go:141] libmachine: (ha-866665-m05)   
	I0315 06:35:40.417103   34974 main.go:141] libmachine: (ha-866665-m05)   </cpu>
	I0315 06:35:40.417112   34974 main.go:141] libmachine: (ha-866665-m05)   <os>
	I0315 06:35:40.417117   34974 main.go:141] libmachine: (ha-866665-m05)     <type>hvm</type>
	I0315 06:35:40.417126   34974 main.go:141] libmachine: (ha-866665-m05)     <boot dev='cdrom'/>
	I0315 06:35:40.417130   34974 main.go:141] libmachine: (ha-866665-m05)     <boot dev='hd'/>
	I0315 06:35:40.417138   34974 main.go:141] libmachine: (ha-866665-m05)     <bootmenu enable='no'/>
	I0315 06:35:40.417142   34974 main.go:141] libmachine: (ha-866665-m05)   </os>
	I0315 06:35:40.417150   34974 main.go:141] libmachine: (ha-866665-m05)   <devices>
	I0315 06:35:40.417155   34974 main.go:141] libmachine: (ha-866665-m05)     <disk type='file' device='cdrom'>
	I0315 06:35:40.417165   34974 main.go:141] libmachine: (ha-866665-m05)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/boot2docker.iso'/>
	I0315 06:35:40.417173   34974 main.go:141] libmachine: (ha-866665-m05)       <target dev='hdc' bus='scsi'/>
	I0315 06:35:40.417178   34974 main.go:141] libmachine: (ha-866665-m05)       <readonly/>
	I0315 06:35:40.417184   34974 main.go:141] libmachine: (ha-866665-m05)     </disk>
	I0315 06:35:40.417190   34974 main.go:141] libmachine: (ha-866665-m05)     <disk type='file' device='disk'>
	I0315 06:35:40.417198   34974 main.go:141] libmachine: (ha-866665-m05)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 06:35:40.417232   34974 main.go:141] libmachine: (ha-866665-m05)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/ha-866665-m05.rawdisk'/>
	I0315 06:35:40.417256   34974 main.go:141] libmachine: (ha-866665-m05)       <target dev='hda' bus='virtio'/>
	I0315 06:35:40.417271   34974 main.go:141] libmachine: (ha-866665-m05)     </disk>
	I0315 06:35:40.417281   34974 main.go:141] libmachine: (ha-866665-m05)     <interface type='network'>
	I0315 06:35:40.417292   34974 main.go:141] libmachine: (ha-866665-m05)       <source network='mk-ha-866665'/>
	I0315 06:35:40.417303   34974 main.go:141] libmachine: (ha-866665-m05)       <model type='virtio'/>
	I0315 06:35:40.417315   34974 main.go:141] libmachine: (ha-866665-m05)     </interface>
	I0315 06:35:40.417326   34974 main.go:141] libmachine: (ha-866665-m05)     <interface type='network'>
	I0315 06:35:40.417337   34974 main.go:141] libmachine: (ha-866665-m05)       <source network='default'/>
	I0315 06:35:40.417349   34974 main.go:141] libmachine: (ha-866665-m05)       <model type='virtio'/>
	I0315 06:35:40.417358   34974 main.go:141] libmachine: (ha-866665-m05)     </interface>
	I0315 06:35:40.417369   34974 main.go:141] libmachine: (ha-866665-m05)     <serial type='pty'>
	I0315 06:35:40.417381   34974 main.go:141] libmachine: (ha-866665-m05)       <target port='0'/>
	I0315 06:35:40.417408   34974 main.go:141] libmachine: (ha-866665-m05)     </serial>
	I0315 06:35:40.417421   34974 main.go:141] libmachine: (ha-866665-m05)     <console type='pty'>
	I0315 06:35:40.417433   34974 main.go:141] libmachine: (ha-866665-m05)       <target type='serial' port='0'/>
	I0315 06:35:40.417445   34974 main.go:141] libmachine: (ha-866665-m05)     </console>
	I0315 06:35:40.417455   34974 main.go:141] libmachine: (ha-866665-m05)     <rng model='virtio'>
	I0315 06:35:40.417474   34974 main.go:141] libmachine: (ha-866665-m05)       <backend model='random'>/dev/random</backend>
	I0315 06:35:40.417487   34974 main.go:141] libmachine: (ha-866665-m05)     </rng>
	I0315 06:35:40.417499   34974 main.go:141] libmachine: (ha-866665-m05)     
	I0315 06:35:40.417508   34974 main.go:141] libmachine: (ha-866665-m05)     
	I0315 06:35:40.417525   34974 main.go:141] libmachine: (ha-866665-m05)   </devices>
	I0315 06:35:40.417536   34974 main.go:141] libmachine: (ha-866665-m05) </domain>
	I0315 06:35:40.417546   34974 main.go:141] libmachine: (ha-866665-m05) 
	I0315 06:35:40.425620   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:98:ee:26 in network default
	I0315 06:35:40.426306   34974 main.go:141] libmachine: (ha-866665-m05) Ensuring networks are active...
	I0315 06:35:40.426341   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:40.427088   34974 main.go:141] libmachine: (ha-866665-m05) Ensuring network default is active
	I0315 06:35:40.427393   34974 main.go:141] libmachine: (ha-866665-m05) Ensuring network mk-ha-866665 is active
	I0315 06:35:40.427774   34974 main.go:141] libmachine: (ha-866665-m05) Getting domain xml...
	I0315 06:35:40.428529   34974 main.go:141] libmachine: (ha-866665-m05) Creating domain...
	I0315 06:35:41.635558   34974 main.go:141] libmachine: (ha-866665-m05) Waiting to get IP...
	I0315 06:35:41.636457   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:41.636912   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:41.636969   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:41.636897   35010 retry.go:31] will retry after 217.033636ms: waiting for machine to come up
	I0315 06:35:41.855526   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:41.855949   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:41.855978   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:41.855903   35010 retry.go:31] will retry after 255.291497ms: waiting for machine to come up
	I0315 06:35:42.112341   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:42.112857   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:42.112886   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:42.112811   35010 retry.go:31] will retry after 339.09908ms: waiting for machine to come up
	I0315 06:35:42.453455   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:42.454082   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:42.454104   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:42.454046   35010 retry.go:31] will retry after 424.365902ms: waiting for machine to come up
	I0315 06:35:42.879574   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:42.880046   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:42.880069   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:42.880009   35010 retry.go:31] will retry after 465.061661ms: waiting for machine to come up
	I0315 06:35:43.346586   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:43.347165   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:43.347193   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:43.347123   35010 retry.go:31] will retry after 781.164919ms: waiting for machine to come up
	I0315 06:35:44.130084   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:44.130465   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:44.130491   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:44.130435   35010 retry.go:31] will retry after 810.324067ms: waiting for machine to come up
	I0315 06:35:44.941914   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:44.942416   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:44.942443   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:44.942380   35010 retry.go:31] will retry after 1.364999014s: waiting for machine to come up
	I0315 06:35:46.309141   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:46.309580   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:46.309602   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:46.309548   35010 retry.go:31] will retry after 1.845228408s: waiting for machine to come up
	I0315 06:35:48.155967   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:48.156394   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:48.156420   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:48.156351   35010 retry.go:31] will retry after 2.231744569s: waiting for machine to come up
	I0315 06:35:50.389686   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:50.390106   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:50.390133   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:50.390080   35010 retry.go:31] will retry after 2.342846616s: waiting for machine to come up
	I0315 06:35:52.735507   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:52.735915   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:52.735941   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:52.735873   35010 retry.go:31] will retry after 2.60746044s: waiting for machine to come up
	I0315 06:35:55.345121   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:55.345593   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:55.345611   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:55.345558   35010 retry.go:31] will retry after 2.982023861s: waiting for machine to come up
	I0315 06:35:58.330735   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:35:58.331208   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find current IP address of domain ha-866665-m05 in network mk-ha-866665
	I0315 06:35:58.331239   34974 main.go:141] libmachine: (ha-866665-m05) DBG | I0315 06:35:58.331172   35010 retry.go:31] will retry after 4.836298379s: waiting for machine to come up
	I0315 06:36:03.169626   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.170094   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has current primary IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.170113   34974 main.go:141] libmachine: (ha-866665-m05) Found IP for machine: 192.168.39.188
	I0315 06:36:03.170125   34974 main.go:141] libmachine: (ha-866665-m05) Reserving static IP address...
	I0315 06:36:03.170572   34974 main.go:141] libmachine: (ha-866665-m05) DBG | unable to find host DHCP lease matching {name: "ha-866665-m05", mac: "52:54:00:de:b3:95", ip: "192.168.39.188"} in network mk-ha-866665
	I0315 06:36:03.244686   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Getting to WaitForSSH function...
	I0315 06:36:03.244715   34974 main.go:141] libmachine: (ha-866665-m05) Reserved static IP address: 192.168.39.188
	I0315 06:36:03.244729   34974 main.go:141] libmachine: (ha-866665-m05) Waiting for SSH to be available...
	I0315 06:36:03.247302   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.247690   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.247719   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.247901   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Using SSH client type: external
	I0315 06:36:03.247927   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa (-rw-------)
	I0315 06:36:03.247951   34974 main.go:141] libmachine: (ha-866665-m05) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:36:03.247965   34974 main.go:141] libmachine: (ha-866665-m05) DBG | About to run SSH command:
	I0315 06:36:03.247977   34974 main.go:141] libmachine: (ha-866665-m05) DBG | exit 0
	I0315 06:36:03.376568   34974 main.go:141] libmachine: (ha-866665-m05) DBG | SSH cmd err, output: <nil>: 
	I0315 06:36:03.376823   34974 main.go:141] libmachine: (ha-866665-m05) KVM machine creation complete!
	I0315 06:36:03.377148   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetConfigRaw
	I0315 06:36:03.377804   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:03.378010   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:03.378214   34974 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 06:36:03.378230   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetState
	I0315 06:36:03.379523   34974 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 06:36:03.379538   34974 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 06:36:03.379544   34974 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 06:36:03.379549   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.381794   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.382118   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.382144   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.382308   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:03.382480   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.382666   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.382793   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:03.382946   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:03.383237   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:03.383250   34974 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 06:36:03.488234   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:36:03.488265   34974 main.go:141] libmachine: Detecting the provisioner...
	I0315 06:36:03.488275   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.491401   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.491822   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.491852   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.492100   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:03.492322   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.492458   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.492661   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:03.492807   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:03.492972   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:03.492983   34974 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 06:36:03.601694   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 06:36:03.601755   34974 main.go:141] libmachine: found compatible host: buildroot
	I0315 06:36:03.601765   34974 main.go:141] libmachine: Provisioning with buildroot...
	I0315 06:36:03.601772   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetMachineName
	I0315 06:36:03.601980   34974 buildroot.go:166] provisioning hostname "ha-866665-m05"
	I0315 06:36:03.601992   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetMachineName
	I0315 06:36:03.602175   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.605003   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.605500   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.605528   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.605735   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:03.605910   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.606064   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.606216   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:03.606420   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:03.606600   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:03.606612   34974 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665-m05 && echo "ha-866665-m05" | sudo tee /etc/hostname
	I0315 06:36:03.728108   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665-m05
	
	I0315 06:36:03.728129   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.731117   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.731560   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.731582   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.731860   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:03.732051   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.732210   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.732359   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:03.732509   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:03.732669   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:03.732685   34974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:36:03.849937   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:36:03.849972   34974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:36:03.850008   34974 buildroot.go:174] setting up certificates
	I0315 06:36:03.850018   34974 provision.go:84] configureAuth start
	I0315 06:36:03.850056   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetMachineName
	I0315 06:36:03.850390   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetIP
	I0315 06:36:03.853035   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.853535   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.853566   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.853618   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.856236   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.856650   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.856672   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.856822   34974 provision.go:143] copyHostCerts
	I0315 06:36:03.856864   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:36:03.856912   34974 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:36:03.856923   34974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:36:03.857005   34974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:36:03.857106   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:36:03.857131   34974 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:36:03.857137   34974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:36:03.857184   34974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:36:03.857249   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:36:03.857272   34974 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:36:03.857281   34974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:36:03.857312   34974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:36:03.857374   34974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665-m05 san=[127.0.0.1 192.168.39.188 ha-866665-m05 localhost minikube]
	I0315 06:36:03.920399   34974 provision.go:177] copyRemoteCerts
	I0315 06:36:03.920448   34974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:36:03.920490   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:03.923199   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.923616   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:03.923639   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:03.923829   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:03.924022   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:03.924181   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:03.924325   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa Username:docker}
	I0315 06:36:04.007986   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:36:04.008081   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 06:36:04.032863   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:36:04.032924   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:36:04.057539   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:36:04.057609   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:36:04.082384   34974 provision.go:87] duration metric: took 232.33531ms to configureAuth
	I0315 06:36:04.082412   34974 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:36:04.082661   34974 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:36:04.082732   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:04.085564   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.086018   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.086047   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.086237   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:04.086418   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.086598   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.086740   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:04.086898   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:04.087162   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:04.087187   34974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:36:04.355384   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
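The step above is minikube writing a CRI-O options drop-in on the new node and restarting the runtime; the echoed CRIO_MINIKUBE_OPTIONS line is simply tee writing the file back to stdout. Reconstructed from the command alone (assuming tee wrote exactly what was piped in), the resulting file is:

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '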
	
	I0315 06:36:04.355417   34974 main.go:141] libmachine: Checking connection to Docker...
	I0315 06:36:04.355424   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetURL
	I0315 06:36:04.356768   34974 main.go:141] libmachine: (ha-866665-m05) DBG | Using libvirt version 6000000
	I0315 06:36:04.358806   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.359180   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.359221   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.359367   34974 main.go:141] libmachine: Docker is up and running!
	I0315 06:36:04.359383   34974 main.go:141] libmachine: Reticulating splines...
	I0315 06:36:04.359389   34974 client.go:171] duration metric: took 24.314146033s to LocalClient.Create
	I0315 06:36:04.359410   34974 start.go:167] duration metric: took 24.314200644s to libmachine.API.Create "ha-866665"
	I0315 06:36:04.359429   34974 start.go:293] postStartSetup for "ha-866665-m05" (driver="kvm2")
	I0315 06:36:04.359440   34974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:36:04.359456   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:04.359687   34974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:36:04.359714   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:04.361911   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.362293   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.362316   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.362414   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:04.362589   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.362695   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:04.362839   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa Username:docker}
	I0315 06:36:04.447586   34974 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:36:04.452200   34974 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:36:04.452225   34974 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:36:04.452283   34974 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:36:04.452360   34974 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:36:04.452371   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:36:04.452451   34974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:36:04.461892   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:36:04.486685   34974 start.go:296] duration metric: took 127.240716ms for postStartSetup
	I0315 06:36:04.486736   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetConfigRaw
	I0315 06:36:04.487263   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetIP
	I0315 06:36:04.489880   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.490278   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.490296   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.490616   34974 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:36:04.490856   34974 start.go:128] duration metric: took 24.463874298s to createHost
	I0315 06:36:04.490882   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:04.492973   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.493327   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.493358   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.493552   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:04.493724   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.493905   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.494047   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:04.494182   34974 main.go:141] libmachine: Using SSH client type: native
	I0315 06:36:04.494350   34974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0315 06:36:04.494360   34974 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:36:04.601394   34974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710484564.574436011
	
	I0315 06:36:04.601412   34974 fix.go:216] guest clock: 1710484564.574436011
	I0315 06:36:04.601422   34974 fix.go:229] Guest: 2024-03-15 06:36:04.574436011 +0000 UTC Remote: 2024-03-15 06:36:04.490869374 +0000 UTC m=+24.798758702 (delta=83.566637ms)
	I0315 06:36:04.601447   34974 fix.go:200] guest clock delta is within tolerance: 83.566637ms
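Worked out from the two timestamps in the fix.go lines above: 06:36:04.574436011 (Guest) − 06:36:04.490869374 (Remote) = 0.083566637 s ≈ 83.57 ms, which is the delta reported and the reason the guest clock is declared within tolerance rather than being reset.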
	I0315 06:36:04.601454   34974 start.go:83] releasing machines lock for "ha-866665-m05", held for 24.574634577s
	I0315 06:36:04.601477   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:04.601743   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetIP
	I0315 06:36:04.604292   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.604756   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.604786   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.604966   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:04.605528   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:04.605760   34974 main.go:141] libmachine: (ha-866665-m05) Calling .DriverName
	I0315 06:36:04.605882   34974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:36:04.605923   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:04.606000   34974 ssh_runner.go:195] Run: systemctl --version
	I0315 06:36:04.606023   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHHostname
	I0315 06:36:04.608493   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.608857   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.608888   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.608907   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.609173   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:04.609345   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.609471   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:04.609515   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:04.609542   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:04.609626   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa Username:docker}
	I0315 06:36:04.609741   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHPort
	I0315 06:36:04.609927   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHKeyPath
	I0315 06:36:04.610087   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetSSHUsername
	I0315 06:36:04.610234   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665-m05/id_rsa Username:docker}
	I0315 06:36:04.731005   34974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:36:04.897785   34974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:36:04.905309   34974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:36:04.905394   34974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:36:04.922865   34974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:36:04.922893   34974 start.go:494] detecting cgroup driver to use...
	I0315 06:36:04.922950   34974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:36:04.939441   34974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:36:04.953717   34974 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:36:04.953779   34974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:36:04.968286   34974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:36:04.982721   34974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:36:05.111528   34974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:36:05.272185   34974 docker.go:233] disabling docker service ...
	I0315 06:36:05.272252   34974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:36:05.287643   34974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:36:05.301193   34974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:36:05.440017   34974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:36:05.586744   34974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:36:05.603252   34974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:36:05.623459   34974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:36:05.623520   34974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:36:05.634637   34974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:36:05.634722   34974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:36:05.646123   34974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:36:05.657360   34974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:36:05.669532   34974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
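The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; only the touched keys are visible in the log, and from the sed expressions alone they end up as:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The final rm -rf clears any leftover /etc/cni/net.mk state before CRI-O is restarted.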
	I0315 06:36:05.681328   34974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:36:05.691724   34974 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:36:05.691785   34974 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:36:05.709033   34974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
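The status-255 sysctl probe fails simply because /proc/sys/net/bridge/ does not exist until the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. Done by hand on a node, the same sequence would be roughly:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded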
	I0315 06:36:05.719890   34974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:36:05.856645   34974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:36:06.002947   34974 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:36:06.003025   34974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:36:06.008978   34974 start.go:562] Will wait 60s for crictl version
	I0315 06:36:06.009059   34974 ssh_runner.go:195] Run: which crictl
	I0315 06:36:06.013096   34974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:36:06.057213   34974 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:36:06.057291   34974 ssh_runner.go:195] Run: crio --version
	I0315 06:36:06.089461   34974 ssh_runner.go:195] Run: crio --version
	I0315 06:36:06.122760   34974 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:36:06.123951   34974 main.go:141] libmachine: (ha-866665-m05) Calling .GetIP
	I0315 06:36:06.126632   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:06.127033   34974 main.go:141] libmachine: (ha-866665-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:b3:95", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:35:54 +0000 UTC Type:0 Mac:52:54:00:de:b3:95 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-866665-m05 Clientid:01:52:54:00:de:b3:95}
	I0315 06:36:06.127055   34974 main.go:141] libmachine: (ha-866665-m05) DBG | domain ha-866665-m05 has defined IP address 192.168.39.188 and MAC address 52:54:00:de:b3:95 in network mk-ha-866665
	I0315 06:36:06.127256   34974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:36:06.131536   34974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
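The grep/cp one-liner above is minikube's idempotent /etc/hosts edit: any existing host.minikube.internal entry is stripped and a fresh one appended, so the node ends up with (reconstructed from the command):

    192.168.39.1	host.minikube.internal

The same pattern appears again further down for control-plane.minikube.internal (192.168.39.254).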
	I0315 06:36:06.145057   34974 mustload.go:65] Loading cluster: ha-866665
	I0315 06:36:06.145313   34974 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:36:06.145626   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:36:06.145669   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:36:06.160375   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0315 06:36:06.160813   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:36:06.161322   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:36:06.161350   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:36:06.161677   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:36:06.161894   34974 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:36:06.163618   34974 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:36:06.163907   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:36:06.163943   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:36:06.178540   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0315 06:36:06.178997   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:36:06.179429   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:36:06.179450   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:36:06.179797   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:36:06.179969   34974 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:36:06.180207   34974 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.188
	I0315 06:36:06.180216   34974 certs.go:194] generating shared ca certs ...
	I0315 06:36:06.180229   34974 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:36:06.180334   34974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:36:06.180370   34974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:36:06.180378   34974 certs.go:256] generating profile certs ...
	I0315 06:36:06.180441   34974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:36:06.180479   34974 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b2feaadc
	I0315 06:36:06.180496   34974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b2feaadc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.188 192.168.39.254]
	I0315 06:36:06.236028   34974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b2feaadc ...
	I0315 06:36:06.236060   34974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b2feaadc: {Name:mkac8433c6d128da47cc0dd8af617d092f85cf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:36:06.236223   34974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b2feaadc ...
	I0315 06:36:06.236235   34974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b2feaadc: {Name:mk28b4f00591063bc6df752331e1f476a6cd179f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:36:06.236306   34974 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.b2feaadc -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:36:06.236458   34974 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.b2feaadc -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:36:06.236634   34974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
	I0315 06:36:06.236649   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:36:06.236661   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:36:06.236676   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:36:06.236693   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:36:06.236706   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:36:06.236718   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:36:06.236729   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:36:06.236740   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:36:06.236786   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:36:06.236813   34974 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:36:06.236823   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:36:06.236844   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:36:06.236871   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:36:06.236891   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:36:06.236942   34974 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:36:06.236984   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:36:06.237003   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:36:06.237020   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:36:06.237049   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:36:06.239970   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:36:06.240445   34974 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:36:06.240485   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:36:06.240667   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:36:06.240820   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:36:06.240975   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:36:06.241108   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:36:06.312966   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0315 06:36:06.318696   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 06:36:06.331643   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0315 06:36:06.336092   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 06:36:06.348880   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 06:36:06.353354   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 06:36:06.367402   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0315 06:36:06.372066   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0315 06:36:06.384657   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0315 06:36:06.389232   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 06:36:06.402383   34974 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0315 06:36:06.407393   34974 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0315 06:36:06.419553   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:36:06.446634   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:36:06.471208   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:36:06.496997   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:36:06.521115   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0315 06:36:06.547691   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:36:06.572925   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:36:06.599560   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:36:06.624216   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:36:06.648264   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:36:06.672278   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:36:06.695730   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 06:36:06.713380   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 06:36:06.730603   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 06:36:06.747378   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0315 06:36:06.764567   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 06:36:06.782666   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0315 06:36:06.800732   34974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 06:36:06.818445   34974 ssh_runner.go:195] Run: openssl version
	I0315 06:36:06.824103   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:36:06.835972   34974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:36:06.840966   34974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:36:06.841042   34974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:36:06.846804   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:36:06.858370   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:36:06.869714   34974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:36:06.874414   34974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:36:06.874475   34974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:36:06.880492   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:36:06.892429   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:36:06.905587   34974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:36:06.910329   34974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:36:06.910383   34974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:36:06.916157   34974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
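Each CA installed under /usr/share/ca-certificates also gets a subject-hash symlink in /etc/ssl/certs, which is why every openssl x509 -hash -noout call above is paired with an ln -fs. For the minikube CA in this run the hash is b5213941, so the check can be reproduced as:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above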
	I0315 06:36:06.930543   34974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:36:06.935339   34974 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 06:36:06.935405   34974 kubeadm.go:928] updating node {m05 192.168.39.188 8443 v1.28.4  true true} ...
	I0315 06:36:06.935547   34974 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
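The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates for this node; the node-specific parts are --hostname-override=ha-866665-m05 and --node-ip=192.168.39.188. Further down the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, where it can be inspected with standard systemd tooling, e.g.:

    systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in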
	I0315 06:36:06.935576   34974 kube-vip.go:111] generating kube-vip config ...
	I0315 06:36:06.935606   34974 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:36:06.954730   34974 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:36:06.954857   34974 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
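This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on port 8443, with leader election and load-balancing enabled (cp_enable/lb_enable). Assuming the usual mirror-pod naming (pod name plus node name), it would be visible as:

    kubectl -n kube-system get pod kube-vip-ha-866665-m05 -o wide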
	I0315 06:36:06.954930   34974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:36:06.966956   34974 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 06:36:06.967019   34974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 06:36:06.979449   34974 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0315 06:36:06.979460   34974 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0315 06:36:06.979482   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:36:06.979499   34974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:36:06.979459   34974 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 06:36:06.979551   34974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 06:36:06.979571   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:36:06.979651   34974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 06:36:06.998827   34974 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:36:06.998900   34974 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 06:36:06.998922   34974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 06:36:06.998927   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 06:36:06.998965   34974 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 06:36:06.998988   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 06:36:07.014936   34974 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 06:36:07.014978   34974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
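Because /var/lib/minikube/binaries/v1.28.4 is empty on the fresh node, kubeadm, kubectl and kubelet are pushed over scp from the host's cache. The "Not caching binary" lines show where that cache was originally populated from; the equivalent manual fetch (URLs and checksum file taken from the log, the sha256sum check being the standard pattern rather than something this log runs) would be:

    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check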
	I0315 06:36:08.014626   34974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 06:36:08.025816   34974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 06:36:08.044435   34974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:36:08.063022   34974 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:36:08.081579   34974 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:36:08.085989   34974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:36:08.100606   34974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:36:08.227220   34974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:36:08.247213   34974 host.go:66] Checking if "ha-866665" exists ...
	I0315 06:36:08.247538   34974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:36:08.247586   34974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:36:08.263239   34974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0315 06:36:08.263720   34974 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:36:08.264287   34974 main.go:141] libmachine: Using API Version  1
	I0315 06:36:08.264315   34974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:36:08.264642   34974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:36:08.264874   34974 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:36:08.265028   34974 start.go:316] joinCluster: &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:36:08.265172   34974 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 06:36:08.265189   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:36:08.268246   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:36:08.268742   34974 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:36:08.268782   34974 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:36:08.268932   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:36:08.269136   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:36:08.269334   34974 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:36:08.269525   34974 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:36:08.435978   34974 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:true Worker:true}
	I0315 06:36:08.436037   34974 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rz7x1e.r0ktoapjwx3yuwr5 --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m05 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443"
	I0315 06:36:35.085222   34974 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rz7x1e.r0ktoapjwx3yuwr5 --discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-866665-m05 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443": (26.649158838s)
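The join command itself was minted on the primary node by the kubeadm token create --print-join-command call a few lines earlier, and --discovery-token-ca-cert-hash is a SHA-256 of the cluster CA's public key. If it ever had to be recomputed by hand, the recipe from the kubeadm documentation (an assumption here; this log never runs it) is:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The join completed in about 26.6s; the failure only surfaces below, when minikube tries to label the new node.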
	I0315 06:36:35.085260   34974 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 06:36:35.664518   34974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m05 minikube.k8s.io/updated_at=2024_03_15T06_36_35_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false
	I0315 06:36:35.783636   34974 start.go:318] duration metric: took 27.518606018s to joinCluster
	I0315 06:36:35.785927   34974 out.go:177] 
	W0315 06:36:35.787433   34974 out.go:239] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error applying control-plane node "m05" label: apply node labels: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m05 minikube.k8s.io/updated_at=2024_03_15T06_36_35_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error applying control-plane node "m05" label: apply node labels: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-866665-m05 minikube.k8s.io/updated_at=2024_03_15T06_36_35_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	W0315 06:36:35.787450   34974 out.go:239] * 
	* 
	W0315 06:36:35.789575   34974 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 06:36:35.791296   34974 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-866665 --control-plane -v=7 --alsologtostderr" : exit status 80
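In short: kubeadm join succeeded, but the follow-up kubectl label issued with --kubeconfig=/var/lib/minikube/kubeconfig was refused at localhost:8443, so node add aborted with GUEST_NODE_ADD and exit status 80. If the API server behind that kubeconfig were reachable again, the label step could in principle be retried by hand with the command quoted in the error above (shortened here to two of its labels):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      label --overwrite nodes ha-866665-m05 minikube.k8s.io/name=ha-866665 minikube.k8s.io/primary=false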
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-866665 -n ha-866665: exit status 2 (262.779566ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-866665 logs -n 25: (1.679143586s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m04 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp testdata/cp-test.txt                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt                       |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665 sudo cat                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665.txt                                 |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m02 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | ha-866665-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-866665 ssh -n ha-866665-m03 sudo cat                                          | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC | 15 Mar 24 06:15 UTC |
	|         | /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-866665 node stop m02 -v=7                                                     | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-866665 node start m02 -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665 -v=7                                                           | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-866665 -v=7                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true -v=7                                                    | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-866665                                                                | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC |                     |
	| node    | ha-866665 node delete m03 -v=7                                                   | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:24 UTC | 15 Mar 24 06:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-866665 stop -v=7                                                              | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-866665 --wait=true                                                         | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:28 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-866665                                                                 | ha-866665 | jenkins | v1.32.0 | 15 Mar 24 06:35 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:28:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:28:00.069231   33437 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:28:00.069368   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069377   33437 out.go:304] Setting ErrFile to fd 2...
	I0315 06:28:00.069382   33437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:28:00.069568   33437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:28:00.070093   33437 out.go:298] Setting JSON to false
	I0315 06:28:00.070988   33437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4176,"bootTime":1710479904,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:28:00.071057   33437 start.go:139] virtualization: kvm guest
	I0315 06:28:00.074620   33437 out.go:177] * [ha-866665] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:28:00.076308   33437 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:28:00.076319   33437 notify.go:220] Checking for updates...
	I0315 06:28:00.079197   33437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:28:00.080588   33437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:28:00.081864   33437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:28:00.083324   33437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:28:00.084651   33437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:28:00.086650   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.087036   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.087091   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.102114   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0315 06:28:00.102558   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.103095   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.103124   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.103438   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.103601   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.103876   33437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:28:00.104159   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.104210   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.119133   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0315 06:28:00.119585   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.120070   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.120090   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.120437   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.120651   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.156291   33437 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:28:00.157886   33437 start.go:297] selected driver: kvm2
	I0315 06:28:00.157902   33437 start.go:901] validating driver "kvm2" against &{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.158040   33437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:28:00.158357   33437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.158422   33437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:28:00.174458   33437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:28:00.175133   33437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:28:00.175191   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:28:00.175203   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:28:00.175251   33437 start.go:340] cluster config:
	{Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:28:00.175362   33437 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:28:00.177468   33437 out.go:177] * Starting "ha-866665" primary control-plane node in "ha-866665" cluster
	I0315 06:28:00.179008   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:28:00.179040   33437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:28:00.179047   33437 cache.go:56] Caching tarball of preloaded images
	I0315 06:28:00.179131   33437 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:28:00.179142   33437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:28:00.179294   33437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/config.json ...
	I0315 06:28:00.179480   33437 start.go:360] acquireMachinesLock for ha-866665: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:28:00.179520   33437 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-866665"
	I0315 06:28:00.179534   33437 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:28:00.179545   33437 fix.go:54] fixHost starting: 
	I0315 06:28:00.179780   33437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:28:00.179810   33437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:28:00.194943   33437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0315 06:28:00.195338   33437 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:28:00.195810   33437 main.go:141] libmachine: Using API Version  1
	I0315 06:28:00.195828   33437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:28:00.196117   33437 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:28:00.196309   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.196495   33437 main.go:141] libmachine: (ha-866665) Calling .GetState
	I0315 06:28:00.198137   33437 fix.go:112] recreateIfNeeded on ha-866665: state=Running err=<nil>
	W0315 06:28:00.198153   33437 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:28:00.200161   33437 out.go:177] * Updating the running kvm2 "ha-866665" VM ...
	I0315 06:28:00.201473   33437 machine.go:94] provisionDockerMachine start ...
	I0315 06:28:00.201503   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:28:00.201694   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.204348   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204777   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.204797   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.204937   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.205101   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205264   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.205376   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.205519   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.205700   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.205711   33437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:28:00.305507   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.305537   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305774   33437 buildroot.go:166] provisioning hostname "ha-866665"
	I0315 06:28:00.305803   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.305989   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.308802   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309169   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.309190   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.309354   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.309553   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.309826   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.310014   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.310190   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.310366   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.310382   33437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-866665 && echo "ha-866665" | sudo tee /etc/hostname
	I0315 06:28:00.429403   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-866665
	
	I0315 06:28:00.429432   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.432235   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432606   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.432644   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.432809   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.432999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433159   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.433289   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.433507   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.433711   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.433736   33437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-866665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-866665/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-866665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:28:00.533992   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:28:00.534024   33437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:28:00.534042   33437 buildroot.go:174] setting up certificates
	I0315 06:28:00.534050   33437 provision.go:84] configureAuth start
	I0315 06:28:00.534059   33437 main.go:141] libmachine: (ha-866665) Calling .GetMachineName
	I0315 06:28:00.534324   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:28:00.536932   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537280   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.537309   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.537403   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.539778   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540170   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.540188   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.540352   33437 provision.go:143] copyHostCerts
	I0315 06:28:00.540374   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540409   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:28:00.540418   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:28:00.540502   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:28:00.540577   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540595   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:28:00.540602   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:28:00.540626   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:28:00.540689   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540712   33437 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:28:00.540721   33437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:28:00.540757   33437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:28:00.540858   33437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.ha-866665 san=[127.0.0.1 192.168.39.78 ha-866665 localhost minikube]
	I0315 06:28:00.727324   33437 provision.go:177] copyRemoteCerts
	I0315 06:28:00.727392   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:28:00.727415   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.730386   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.730795   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.730817   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.731033   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.731269   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.731448   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.731603   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:28:00.811679   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:28:00.811760   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 06:28:00.840244   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:28:00.840325   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:28:00.866687   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:28:00.866766   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:28:00.893745   33437 provision.go:87] duration metric: took 359.681699ms to configureAuth
	I0315 06:28:00.893783   33437 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:28:00.894043   33437 config.go:182] Loaded profile config "ha-866665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:28:00.894134   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:28:00.897023   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897388   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:28:00.897411   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:28:00.897569   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:28:00.897752   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.897920   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:28:00.898052   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:28:00.898189   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:28:00.898433   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:28:00.898471   33437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:29:35.718292   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:29:35.718325   33437 machine.go:97] duration metric: took 1m35.516837024s to provisionDockerMachine
	I0315 06:29:35.718343   33437 start.go:293] postStartSetup for "ha-866665" (driver="kvm2")
	I0315 06:29:35.718359   33437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:29:35.718374   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.718720   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:29:35.718757   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.722200   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722789   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.722838   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.722915   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.723113   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.723278   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.723452   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:35.808948   33437 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:29:35.813922   33437 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:29:35.813958   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:29:35.814035   33437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:29:35.814150   33437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:29:35.814165   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:29:35.814262   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:29:35.825162   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:29:35.853607   33437 start.go:296] duration metric: took 135.248885ms for postStartSetup
	I0315 06:29:35.853656   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:35.853968   33437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 06:29:35.853999   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.857046   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857515   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.857538   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.857740   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.857904   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.858174   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.858327   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	W0315 06:29:35.939552   33437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 06:29:35.939581   33437 fix.go:56] duration metric: took 1m35.76003955s for fixHost
	I0315 06:29:35.939603   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:35.942284   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942621   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:35.942656   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:35.942842   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:35.943040   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943209   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:35.943341   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:35.943527   33437 main.go:141] libmachine: Using SSH client type: native
	I0315 06:29:35.943686   33437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0315 06:29:35.943696   33437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:29:36.045713   33437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710484176.008370872
	
	I0315 06:29:36.045741   33437 fix.go:216] guest clock: 1710484176.008370872
	I0315 06:29:36.045749   33437 fix.go:229] Guest: 2024-03-15 06:29:36.008370872 +0000 UTC Remote: 2024-03-15 06:29:35.939588087 +0000 UTC m=+95.917046644 (delta=68.782785ms)
	I0315 06:29:36.045784   33437 fix.go:200] guest clock delta is within tolerance: 68.782785ms
	I0315 06:29:36.045790   33437 start.go:83] releasing machines lock for "ha-866665", held for 1m35.866260772s
	I0315 06:29:36.045808   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.046095   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:29:36.048748   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049090   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.049125   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.049284   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.049937   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050138   33437 main.go:141] libmachine: (ha-866665) Calling .DriverName
	I0315 06:29:36.050191   33437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:29:36.050244   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.050342   33437 ssh_runner.go:195] Run: cat /version.json
	I0315 06:29:36.050361   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHHostname
	I0315 06:29:36.053057   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053439   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053473   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053529   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.053632   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.053795   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.053957   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.053958   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:29:36.053983   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:29:36.054117   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.054145   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHPort
	I0315 06:29:36.054330   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHKeyPath
	I0315 06:29:36.054470   33437 main.go:141] libmachine: (ha-866665) Calling .GetSSHUsername
	I0315 06:29:36.054647   33437 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/ha-866665/id_rsa Username:docker}
	I0315 06:29:36.174816   33437 ssh_runner.go:195] Run: systemctl --version
	I0315 06:29:36.181715   33437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:29:36.358096   33437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:29:36.367581   33437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:29:36.367659   33437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:29:36.383454   33437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:29:36.383480   33437 start.go:494] detecting cgroup driver to use...
	I0315 06:29:36.383550   33437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:29:36.407514   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:29:36.425757   33437 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:29:36.425807   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:29:36.448161   33437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:29:36.466873   33437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:29:36.634934   33437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:29:36.809139   33437 docker.go:233] disabling docker service ...
	I0315 06:29:36.809211   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:29:36.831715   33437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:29:36.847966   33437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:29:37.006211   33437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:29:37.162186   33437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:29:37.178537   33437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:29:37.200300   33437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:29:37.200368   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.212398   33437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:29:37.212455   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.223908   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.235824   33437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:29:37.247520   33437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:29:37.259008   33437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:29:37.269062   33437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:29:37.281152   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:29:37.434941   33437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:31:11.689384   33437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.254391387s)
	I0315 06:31:11.689430   33437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:31:11.689496   33437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:31:11.698102   33437 start.go:562] Will wait 60s for crictl version
	I0315 06:31:11.698154   33437 ssh_runner.go:195] Run: which crictl
	I0315 06:31:11.702605   33437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:31:11.746302   33437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:31:11.746373   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.777004   33437 ssh_runner.go:195] Run: crio --version
	I0315 06:31:11.813410   33437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:31:11.815123   33437 main.go:141] libmachine: (ha-866665) Calling .GetIP
	I0315 06:31:11.818257   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818696   33437 main.go:141] libmachine: (ha-866665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:55:9d", ip: ""} in network mk-ha-866665: {Iface:virbr1 ExpiryTime:2024-03-15 07:10:36 +0000 UTC Type:0 Mac:52:54:00:96:55:9d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-866665 Clientid:01:52:54:00:96:55:9d}
	I0315 06:31:11.818717   33437 main.go:141] libmachine: (ha-866665) DBG | domain ha-866665 has defined IP address 192.168.39.78 and MAC address 52:54:00:96:55:9d in network mk-ha-866665
	I0315 06:31:11.818982   33437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:31:11.824253   33437 kubeadm.go:877] updating cluster {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:31:11.824379   33437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:31:11.824419   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.877400   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.877423   33437 crio.go:415] Images already preloaded, skipping extraction
	I0315 06:31:11.877466   33437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:31:11.913358   33437 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:31:11.913383   33437 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:31:11.913393   33437 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.28.4 crio true true} ...
	I0315 06:31:11.913524   33437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-866665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 06:31:11.913604   33437 ssh_runner.go:195] Run: crio config
	I0315 06:31:11.961648   33437 cni.go:84] Creating CNI manager for ""
	I0315 06:31:11.961666   33437 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:31:11.961674   33437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:31:11.961692   33437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-866665 NodeName:ha-866665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:31:11.961854   33437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-866665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
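	The multi-document kubeadm.yaml above is rendered from the cluster parameters captured earlier in the log (control-plane endpoint, pod and service CIDRs, Kubernetes version). A minimal sketch of that rendering step in Go, using only text/template and illustrative parameter names rather than minikube's real template, might look like this:

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified stand-in for the kubeadm template minikube renders above; only
	// a few ClusterConfiguration fields shown in the log are included.
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type params struct {
		ControlPlaneEndpoint string
		Port                 int
		KubernetesVersion    string
		DNSDomain            string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		// Values copied from the config dump above.
		p := params{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			Port:                 8443,
			KubernetesVersion:    "v1.28.4",
			DNSDomain:            "cluster.local",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(clusterCfg))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

	Running it prints a ClusterConfiguration fragment equivalent to the one above; minikube's actual bootstrapper also emits the InitConfiguration, KubeletConfiguration and KubeProxyConfiguration documents.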
	
	I0315 06:31:11.961877   33437 kube-vip.go:111] generating kube-vip config ...
	I0315 06:31:11.961925   33437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 06:31:11.974783   33437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 06:31:11.974879   33437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
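	The kube-vip static pod manifest above carries its entire configuration as environment variables (VIP address, leader-election lease settings, load-balancer port). A small sketch for pulling the advertised VIP back out of the written manifest, assuming the gopkg.in/yaml.v3 package and the default static pod path, could be:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// Only the fields we read are declared; everything else in the manifest is ignored.
	type staticPod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod staticPod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		for _, c := range pod.Spec.Containers {
			for _, e := range c.Env {
				if e.Name == "address" {
					fmt.Println("kube-vip advertises VIP:", e.Value)
				}
			}
		}
	}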
	I0315 06:31:11.974928   33437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:31:11.985708   33437 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:31:11.985779   33437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 06:31:11.996849   33437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 06:31:12.015133   33437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:31:12.032728   33437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 06:31:12.050473   33437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 06:31:12.071279   33437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 06:31:12.075748   33437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:31:12.239653   33437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:31:12.288563   33437 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665 for IP: 192.168.39.78
	I0315 06:31:12.288592   33437 certs.go:194] generating shared ca certs ...
	I0315 06:31:12.288612   33437 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.288830   33437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:31:12.288895   33437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:31:12.288911   33437 certs.go:256] generating profile certs ...
	I0315 06:31:12.289016   33437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/client.key
	I0315 06:31:12.289054   33437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211
	I0315 06:31:12.289075   33437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78 192.168.39.27 192.168.39.254]
	I0315 06:31:12.459406   33437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 ...
	I0315 06:31:12.459436   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211: {Name:mkce2140c17c76a43eac310ec6de314aee20f623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459623   33437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 ...
	I0315 06:31:12.459635   33437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211: {Name:mk03c3e33d6e2b84dc52dfa74e4afefa164f8f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:31:12.459705   33437 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt
	I0315 06:31:12.459841   33437 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key.39db9211 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key
	I0315 06:31:12.459958   33437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key
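	The apiserver profile cert generated above has to carry every address a client might dial: the in-cluster service IP, localhost, the node IPs and the 192.168.39.254 VIP. A self-contained sketch of issuing such a SAN-bearing certificate with the standard crypto/x509 package follows (self-signed here for brevity, whereas minikube signs with its CA); the file name and exact SAN list are illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs mirroring the log above: service IP, localhost, node IP, VIP.
		sans := []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.78"),
			net.ParseIP("192.168.39.254"),
		}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			// Matches the CertExpiration:26280h0m0s value in the cluster config.
			NotAfter:    time.Now().Add(26280 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: sans,
		}

		// Self-signed for the sketch; template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("apiserver.crt", pemBytes, 0o644); err != nil {
			panic(err)
		}
	}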
	I0315 06:31:12.459972   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:31:12.459985   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:31:12.459995   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:31:12.460005   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:31:12.460016   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:31:12.460025   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:31:12.460039   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:31:12.460056   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:31:12.460105   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:31:12.460142   33437 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:31:12.460148   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:31:12.460168   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:31:12.460186   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:31:12.460202   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:31:12.460236   33437 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:31:12.460264   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:12.460276   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:31:12.460287   33437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:31:12.460824   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:31:12.560429   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:31:12.636236   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:31:12.785307   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:31:12.994552   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 06:31:13.094315   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:31:13.148757   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:31:13.186255   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/ha-866665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 06:31:13.219468   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:31:13.298432   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:31:13.356395   33437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:31:13.388275   33437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:31:13.416436   33437 ssh_runner.go:195] Run: openssl version
	I0315 06:31:13.425310   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:31:13.441059   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446108   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.446176   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:31:13.452449   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:31:13.463777   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:31:13.475540   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480658   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.480725   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:31:13.490764   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:31:13.507483   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:31:13.522259   33437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527387   33437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.527450   33437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:31:13.535340   33437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
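	The three openssl/ln pairs above install each PEM into the system trust store under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A rough Go equivalent, shelling out to the same openssl invocation so the hash matches what the log shows, with hard-coded illustrative paths, might be:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

		// Same command as in the log: print the subject hash of the certificate.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// Create /etc/ssl/certs/<hash>.0 pointing at the PEM if it is not already there.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(pemPath, link); err != nil {
				panic(err)
			}
		}
		fmt.Println("trust link:", link)
	}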
	I0315 06:31:13.547679   33437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:31:13.555390   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:31:13.561596   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:31:13.569097   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:31:13.575180   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:31:13.581211   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:31:13.589108   33437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
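	Each `openssl x509 -checkend 86400` call above fails if the certificate expires within the next 24 hours, which is what prompts minikube to regenerate it. An equivalent check in pure Go, assuming one of the certificate paths from the log, could be written as:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}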
	I0315 06:31:13.597196   33437 kubeadm.go:391] StartCluster: {Name:ha-866665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-866665 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.184 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:31:13.597364   33437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:31:13.597432   33437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:31:13.654360   33437 cri.go:89] found id: "a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703"
	I0315 06:31:13.654388   33437 cri.go:89] found id: "f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a"
	I0315 06:31:13.654393   33437 cri.go:89] found id: "4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2"
	I0315 06:31:13.654401   33437 cri.go:89] found id: "53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869"
	I0315 06:31:13.654406   33437 cri.go:89] found id: "bdde9d3309aa653bdde9bc5fb009352128cc082c6210723aabf3090316773af4"
	I0315 06:31:13.654411   33437 cri.go:89] found id: "e09471036e57d5a30f82f0f6b4431734b99e26e0508edbf44bf1bb0772de441a"
	I0315 06:31:13.654414   33437 cri.go:89] found id: "e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596"
	I0315 06:31:13.654418   33437 cri.go:89] found id: "cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7"
	I0315 06:31:13.654422   33437 cri.go:89] found id: "5eaa4a539d19c14760c52937c8f1e1fe12532f9f8bd2064f645d738bcd019bcb"
	I0315 06:31:13.654429   33437 cri.go:89] found id: "a632d3a2baa8574a6dfa2fd14d34d67ff1594473528173c4f3cfd95f4725dcbe"
	I0315 06:31:13.654434   33437 cri.go:89] found id: "e490c56eb4c5d1eed5a5a3d95c47fd84a974b0db29822a120857edf9b3749306"
	I0315 06:31:13.654438   33437 cri.go:89] found id: "c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e"
	I0315 06:31:13.654447   33437 cri.go:89] found id: "950153b4c9efe4316e3c3891bb3ef221780f0fe05967f5dd112e0b11f5c73088"
	I0315 06:31:13.654451   33437 cri.go:89] found id: "a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2"
	I0315 06:31:13.654473   33437 cri.go:89] found id: "20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a"
	I0315 06:31:13.654481   33437 cri.go:89] found id: "002360447d19f48a6c9aceda11e1ac67337580d0ec0bac7f9b75503f387efb0c"
	I0315 06:31:13.654485   33437 cri.go:89] found id: "f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c"
	I0315 06:31:13.654494   33437 cri.go:89] found id: "8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de"
	I0315 06:31:13.654501   33437 cri.go:89] found id: "bede6c7f8912b13b1393e0fdd692ee76e9a0c621b35ebdaae1e0fdbc7d256780"
	I0315 06:31:13.654506   33437 cri.go:89] found id: "c0ecd2e85889290ffc2876db9407cbc49c9213575c5c2edb96fba1d4a13b8c90"
	I0315 06:31:13.654513   33437 cri.go:89] found id: "c07640cff4cedd7200db6dc04a3f4d74a7cb002877f61eb0dd86fb6c5e4c00d0"
	I0315 06:31:13.654517   33437 cri.go:89] found id: "7fcd79ed43f7b877296fc88df84b18b419f9eee806ece61a2fb354f3b078d0c3"
	I0315 06:31:13.654524   33437 cri.go:89] found id: "adc81452470007eeb06e6ad4a10ac0e477a9ed99489bb369e2bb63c888bec435"
	I0315 06:31:13.654528   33437 cri.go:89] found id: ""
	I0315 06:31:13.654578   33437 ssh_runner.go:195] Run: sudo runc list -f json
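	The container IDs listed above come from the crictl invocation at the top of this block, filtered to the kube-system namespace. A minimal, purely illustrative Go wrapper around that same command:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Mirrors the command logged above: list container IDs (running or exited)
	// whose pods live in the kube-system namespace.
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println(id)
		}
	}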
	
	
	==> CRI-O <==
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.587187518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7817b2d-a7bf-4d4c-86ca-1c59d75c9c8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.587839977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8c0d89649b8e19e72c5ead7c0a7c170f580163e03561a7aaa2a959ddd5d2fdd,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710484585558070451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
6e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f
70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2c
c69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907c
d3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7817b2d-a7bf-4d4c-86ca-1c59d75c9c8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 conmon[8902]: conmon 66b0d5dc395b75031ce4 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Mar 15 06:36:36 ha-866665 conmon[8902]: conmon 66b0d5dc395b75031ce4 <ndebug>: terminal_ctrl_fd: 12
	Mar 15 06:36:36 ha-866665 conmon[8902]: conmon 66b0d5dc395b75031ce4 <ndebug>: winsz read side: 16, winsz write side: 16
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.660668353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2d25c36-317f-4f3b-bada-65b7a9f24390 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.660767646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2d25c36-317f-4f3b-bada-65b7a9f24390 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.661818643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aec6472d-1520-4a1a-bc53-cf0e955505a7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.662604452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484596662576328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aec6472d-1520-4a1a-bc53-cf0e955505a7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.663201163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b15300e-e62c-4fc1-8866-2fec3fd1de77 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.663309727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b15300e-e62c-4fc1-8866-2fec3fd1de77 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.663758384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8c0d89649b8e19e72c5ead7c0a7c170f580163e03561a7aaa2a959ddd5d2fdd,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710484585558070451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
6e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f
70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2c
c69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907c
d3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b15300e-e62c-4fc1-8866-2fec3fd1de77 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.723701880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45bcca8a-84ac-47d6-91fd-2a924ed75ebd name=/runtime.v1.RuntimeService/Version
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.723800334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45bcca8a-84ac-47d6-91fd-2a924ed75ebd name=/runtime.v1.RuntimeService/Version
	Mar 15 06:36:36 ha-866665 conmon[8902]: conmon 66b0d5dc395b75031ce4 <ndebug>: container PID: 8915
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.728523213Z" level=debug msg="Received container pid: 8915" file="oci/runtime_oci.go:284" id=8758da59-624d-4b47-a0f0-7e0506153122 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.729145144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2eff3d21-99a2-455c-9882-9669ed77ba48 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.730061818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710484596730023415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eff3d21-99a2-455c-9882-9669ed77ba48 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.731039608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=785f7df2-83ba-4395-b55b-0d4c9768ec25 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.731112716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=785f7df2-83ba-4395-b55b-0d4c9768ec25 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.731837708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e8c0d89649b8e19e72c5ead7c0a7c170f580163e03561a7aaa2a959ddd5d2fdd,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710484585558070451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd,PodSandboxId:16c817ba7d264b8f7bba3ab1d1ff3074904f7f53258453dc326eaf7fab00203f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484440561853992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c36bd8aefb52162dc1bcef34f4bb25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 5,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568bece5a49f6f0a84fb69937c117029e7dedd90c81050e5a36aef2a90dc985,PodSandboxId:1a1fdba5ec224a16b91ab05b40fb30f9872f97f1e31f57c3bc8180a2d08c9b0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710484306218979337,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2cc69f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54,PodSandboxId:d7a69a3a337af71582ce1051a4f71e864ffa1af63897bfe9d000cccced5c310f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484278390877828,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9nvvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c5333df-bb98-4f27-9197-875a160f4ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e5c72ddc,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5,PodSandboxId:9f2fb2d671096660329cda96fc359a94e105ee6b24bfcbbfca16d6e253ac9735,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710484278257874276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4,PodSandboxId:a0a76006f9e8a838dea8efb6b347aa0d186f4a862312b796d3daefd54f80de2c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484278208014606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11128b3-f84e-4526-992d-56e278c3f7c9,},Annotations:map[string]string{io.kubernetes.container.hash: 136548f5,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8,PodSandboxId:22072600a839b56ec14de1ab04f264d93721072ea15b5f022bfd1fa67aa1a744,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710484278111817672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907cd3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703,PodSandboxId:cf57c2ff9f3b2fd48a06472a048c05a9853e0d41b8f41290305af04765eca0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272859108005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a,PodSandboxId:14af170de4b571898874a48dcf002fe94fb21ebca6d8b0ad7fe8f5a1da7e1810,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710484272801262973,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4632db4347aeb5d329b75f46db342bda137407096006e2ba18e03e6a7ab005a2,PodSandboxId:09c39329a8da2573ad3cbdd895668b0e9a7645b64732383eae88272e9f00894b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:5,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710484272726972441,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869,PodSandboxId:7f2d26260bc132df83346e260059f600807bc71e82449c959512f6772a42dc78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710484272697875290,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
6e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596,PodSandboxId:f7b655acbd70858956764f6cbf8b2cb467f44a747f8c4a0a4fa876282cf4b56a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484012956974785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec32969267e5d443d53332f
70d668161,},Annotations:map[string]string{io.kubernetes.container.hash: 4d761a02,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4635f3b41c27cc574f00181fa43463b7015da1cead64c27648cf9f11a76de7,PodSandboxId:b3fef0e73d7bb1217924a1358207053526d7b678fa6c4f039b71d9805717c222,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:4,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710483971559144395,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affdbe5d0709ec0c8cfe4e796df74130,},Annotations:map[string]strin
g{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b44fa63b7f7d4b61990c88d7da99f2d4fb738670e3e6e22354b5a5f3a0c29d,PodSandboxId:344f655b58490a4f777d55dd2c551b578c705e7b4a71a6577daa53c114bbc4a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710483786887352054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-82knb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c12d72ab-189f-4a4a-a7df-54e10184a9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7b2c
c69f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2,PodSandboxId:c28c4fa9bcc72d9300f656805a3e97c6dcdddab3671e12d0480e6e96be9c6391,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710483753567192605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sbxgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fac82d-5f3a-42b8-99b7-1f4ee45c0f98,},Annotations:map[string]string{io.kubernetes.container.hash: 2d71c9ea,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e,PodSandboxId:268a2a5390630a207b2375a2133c6867d83efaacce2f2dcb1508f1a9aaf65799,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483753601335090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r57px,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7f8905-5519-453d-a9a3-26b5d511f1c3,},Annotations:map[string]string{io.kubernetes.container.hash: 579e5097,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c,PodSandboxId:8899126815da59ab432c51df4ada6cd869277719ab23e03fadd977ebca310687,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710483753499514985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-866665,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 06e4fc01158af0abcc722b76c4fcbaee,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a,PodSandboxId:79337bac30908f0c1e437fa581563a038ff9e4cfd59fdd43f1d02020f0cac259,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710483753556829494,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-866665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72b65491f907c
d3dd68444500227632e,},Annotations:map[string]string{io.kubernetes.container.hash: ad0ffc58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de,PodSandboxId:a1032069bcb086a4d107a66ed66790e1c54fe03538931ef9a0f95200989338b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710483748541541438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mgthb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6498160a-372c-4273-a82c-b06c4b7b239b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 1b336595,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=785f7df2-83ba-4395-b55b-0d4c9768ec25 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.750576712Z" level=info msg="Created container 66b0d5dc395b75031ce4e5aa249bfe01fc395f745b9b90bfef6c7a85ef10ca1d: kube-system/kindnet-9nvvx/kindnet-cni" file="server/container_create.go:491" id=8758da59-624d-4b47-a0f0-7e0506153122 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.750682159Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:66b0d5dc395b75031ce4e5aa249bfe01fc395f745b9b90bfef6c7a85ef10ca1d,}" file="otel-collector/interceptors.go:74" id=8758da59-624d-4b47-a0f0-7e0506153122 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.752064414Z" level=debug msg="Request: &StartContainerRequest{ContainerId:66b0d5dc395b75031ce4e5aa249bfe01fc395f745b9b90bfef6c7a85ef10ca1d,}" file="otel-collector/interceptors.go:62" id=19e42b15-c4aa-4da3-9761-8b4d1cc08821 name=/runtime.v1.RuntimeService/StartContainer
	Mar 15 06:36:36 ha-866665 crio[6969]: time="2024-03-15 06:36:36.752141493Z" level=info msg="Starting container: 66b0d5dc395b75031ce4e5aa249bfe01fc395f745b9b90bfef6c7a85ef10ca1d" file="server/container_start.go:21" id=19e42b15-c4aa-4da3-9761-8b4d1cc08821 name=/runtime.v1.RuntimeService/StartContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	66b0d5dc395b7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   Less than a second ago   Running             kindnet-cni               6                   d7a69a3a337af       kindnet-9nvvx
	e8c0d89649b8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago           Running             storage-provisioner       7                   a0a76006f9e8a       storage-provisioner
	8f4c9a0644c94       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   2 minutes ago            Exited              kube-controller-manager   5                   16c817ba7d264       kube-controller-manager-ha-866665
	8568bece5a49f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   4 minutes ago            Running             busybox                   2                   1a1fdba5ec224       busybox-5b5d89c9d6-82knb
	10518fb395cce       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   5 minutes ago            Exited              kindnet-cni               5                   d7a69a3a337af       kindnet-9nvvx
	fd937a8a91dec       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   5 minutes ago            Running             kube-proxy                2                   9f2fb2d671096       kube-proxy-sbxgg
	f31b9e9704e22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago            Exited              storage-provisioner       6                   a0a76006f9e8a       storage-provisioner
	959c3adf756ac       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   5 minutes ago            Running             etcd                      2                   22072600a839b       etcd-ha-866665
	a3b8244e29a11       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   5 minutes ago            Running             coredns                   2                   cf57c2ff9f3b2       coredns-5dd5756b68-r57px
	f8f276ed61ae4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   5 minutes ago            Running             coredns                   2                   14af170de4b57       coredns-5dd5756b68-mgthb
	4632db4347aeb       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   5 minutes ago            Running             kube-vip                  5                   09c39329a8da2       kube-vip-ha-866665
	53858589abe09       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   5 minutes ago            Running             kube-scheduler            2                   7f2d26260bc13       kube-scheduler-ha-866665
	e4370ef8479c8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago            Exited              kube-apiserver            4                   f7b655acbd708       kube-apiserver-ha-866665
	cb4635f3b41c2       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   10 minutes ago           Exited              kube-vip                  4                   b3fef0e73d7bb       kube-vip-ha-866665
	d0b44fa63b7f7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   13 minutes ago           Exited              busybox                   1                   344f655b58490       busybox-5b5d89c9d6-82knb
	c7b60409e5d22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago           Exited              coredns                   1                   268a2a5390630       coredns-5dd5756b68-r57px
	a72cadbbcef74       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago           Exited              kube-proxy                1                   c28c4fa9bcc72       kube-proxy-sbxgg
	20a813e0950a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago           Exited              etcd                      1                   79337bac30908       etcd-ha-866665
	f1c6cdc2511cf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago           Exited              kube-scheduler            1                   8899126815da5       kube-scheduler-ha-866665
	8507c4a363e13       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago           Exited              coredns                   1                   a1032069bcb08       coredns-5dd5756b68-mgthb
	
	
	==> coredns [8507c4a363e13f76515845ccd38eb4fda5ddfbd17bb8eb5a14e138f2607603de] <==
	Trace[800224354]: ---"Objects listed" error:Unauthorized 12252ms (06:27:21.915)
	Trace[800224354]: [12.252932189s] [12.252932189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[532336764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:10.595) (total time: 11322ms):
	Trace[532336764]: ---"Objects listed" error:Unauthorized 11321ms (06:27:21.916)
	Trace[532336764]: [11.322096854s] [11.322096854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1149679676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:25.734) (total time: 10173ms):
	Trace[1149679676]: ---"Objects listed" error:Unauthorized 10171ms (06:27:35.906)
	Trace[1149679676]: [10.173827374s] [10.173827374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3b8244e29a11ef4a5d7f6bc11533ad0fbe55bc470717991ed5d1537f8c04703] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40115 - 43756 "HINFO IN 2717951387138798829.7180821467390164679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010608616s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	
	
	==> coredns [c7b60409e5d22324ba312edc6c4f5e94d3a1b00f5d9f7dc2922756f23a23f31e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[652911396]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.712) (total time: 11205ms):
	Trace[652911396]: ---"Objects listed" error:Unauthorized 11205ms (06:27:35.918)
	Trace[652911396]: [11.205453768s] [11.205453768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[659281961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (15-Mar-2024 06:27:24.824) (total time: 11093ms):
	Trace[659281961]: ---"Objects listed" error:Unauthorized 11093ms (06:27:35.918)
	Trace[659281961]: [11.093964434s] [11.093964434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2450": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f8f276ed61ae487693a7e0d2b2f9aa737ceb4003ccb92006a179a894db6abb8a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47355 - 18813 "HINFO IN 6697230244971142980.3977120107732033871. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009991661s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.082898] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.720187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.465799] kauditd_printk_skb: 38 callbacks suppressed
	[Mar15 06:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar15 06:22] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.152680] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.183958] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159389] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +0.267565] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +5.718504] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +0.088299] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.252692] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.603346] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 06:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.000898] kauditd_printk_skb: 4 callbacks suppressed
	[Mar15 06:29] systemd-fstab-generator[6878]: Ignoring "noauto" option for root device
	[  +0.182963] systemd-fstab-generator[6890]: Ignoring "noauto" option for root device
	[  +0.207318] systemd-fstab-generator[6904]: Ignoring "noauto" option for root device
	[  +0.160914] systemd-fstab-generator[6916]: Ignoring "noauto" option for root device
	[  +0.264473] systemd-fstab-generator[6940]: Ignoring "noauto" option for root device
	[Mar15 06:31] systemd-fstab-generator[7068]: Ignoring "noauto" option for root device
	[  +0.099499] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.728671] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.581573] kauditd_printk_skb: 34 callbacks suppressed
	[Mar15 06:32] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [20a813e0950a0ee7b0095bc9f0aeae8c6e0eeac53471a0be98dbeaf701f36b8a] <==
	{"level":"info","ts":"2024-03-15T06:27:58.935705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:27:58.93572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"warn","ts":"2024-03-15T06:27:59.520795Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"af74041eca695613","rtt":"9.002373ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"warn","ts":"2024-03-15T06:27:59.521075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"af74041eca695613","rtt":"1.272856ms","error":"dial tcp 192.168.39.27:2380: i/o timeout"}
	{"level":"info","ts":"2024-03-15T06:28:00.636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.636133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 [logterm: 4, index: 2886] sent MsgPreVote request to af74041eca695613 at term 4"}
	{"level":"info","ts":"2024-03-15T06:28:00.991925Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T06:28:00.992032Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-03-15T06:28:00.992202Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.992285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994802Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:28:00.994827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:28:00.994881Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"83fde65c75733ea3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T06:28:00.99506Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995086Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995114Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995281Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995331Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995423Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:00.995436Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af74041eca695613"}
	{"level":"info","ts":"2024-03-15T06:28:01.013373Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.013771Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-03-15T06:28:01.01388Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-866665","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> etcd [959c3adf756ac670f71e9a23a6963729f6c9b1a1866a5044f8cc579dc7a28be8] <==
	{"level":"info","ts":"2024-03-15T06:32:00.277708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-03-15T06:35:55.375896Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"af74041eca695613","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"16.54764ms"}
	{"level":"warn","ts":"2024-03-15T06:36:25.522569Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"af74041eca695613","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"60.027423ms"}
	{"level":"warn","ts":"2024-03-15T06:36:25.934819Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"af74041eca695613","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"75.267704ms"}
	{"level":"warn","ts":"2024-03-15T06:36:25.935954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.371055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6202457520828427260 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:56138e40cf6dabfb>","response":"size:40"}
	{"level":"warn","ts":"2024-03-15T06:36:30.679945Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.188:44864","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-15T06:36:32.942768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 switched to configuration voters=(9511011272858222243 12642734584227255827 18109091707333818674)"}
	{"level":"info","ts":"2024-03-15T06:36:32.948083Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","added-peer-id":"fb506af6349ab932","added-peer-peer-urls":["https://192.168.39.188:2380"]}
	{"level":"info","ts":"2024-03-15T06:36:32.948183Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.948344Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.949661Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.949963Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.950314Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.950416Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.950458Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:32.951415Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932","remote-peer-urls":["https://192.168.39.188:2380"]}
	{"level":"info","ts":"2024-03-15T06:36:34.024865Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"fb506af6349ab932","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T06:36:34.024941Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:34.024971Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:34.027836Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:34.031042Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"info","ts":"2024-03-15T06:36:34.05846Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"83fde65c75733ea3","to":"fb506af6349ab932","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T06:36:34.05853Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"83fde65c75733ea3","remote-peer-id":"fb506af6349ab932"}
	{"level":"warn","ts":"2024-03-15T06:36:34.060493Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.188:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-15T06:36:34.060629Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.188:41968","server-name":"","error":"read tcp 192.168.39.78:2380->192.168.39.188:41968: read: connection reset by peer"}
	
	
	==> kernel <==
	 06:36:37 up 26 min,  0 users,  load average: 0.11, 0.17, 0.26
	Linux ha-866665 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54] <==
	I0315 06:31:18.825608       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 06:31:19.125113       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:19.440688       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:22.512472       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 06:31:24.514025       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 06:31:27.514923       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [e4370ef8479c8071e7f6e97ceb9378aad8d67434c8fb0c47b8d0fc9bda7f3596] <==
	W0315 06:27:50.003349       1 reflector.go:535] storage/cacher.go:/volumeattachments: failed to list *storage.VolumeAttachment: etcdserver: request timed out
	I0315 06:27:50.003365       1 trace.go:236] Trace[731904531]: "Reflector ListAndWatch" name:storage/cacher.go:/volumeattachments (15-Mar-2024 06:27:36.912) (total time: 13090ms):
	Trace[731904531]: ---"Objects listed" error:etcdserver: request timed out 13090ms (06:27:50.003)
	Trace[731904531]: [13.090756124s] [13.090756124s] END
	E0315 06:27:50.003369       1 cacher.go:470] cacher (volumeattachments.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.VolumeAttachment: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003380       1 reflector.go:535] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	I0315 06:27:50.003393       1 trace.go:236] Trace[592093521]: "Reflector ListAndWatch" name:storage/cacher.go:/prioritylevelconfigurations (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[592093521]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[592093521]: [13.09928422s] [13.09928422s] END
	E0315 06:27:50.003397       1 cacher.go:470] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003439       1 reflector.go:535] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	I0315 06:27:50.003456       1 trace.go:236] Trace[746771974]: "Reflector ListAndWatch" name:storage/cacher.go:/poddisruptionbudgets (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[746771974]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[746771974]: [13.099292505s] [13.099292505s] END
	E0315 06:27:50.003482       1 cacher.go:470] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003501       1 reflector.go:535] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out
	I0315 06:27:50.003534       1 trace.go:236] Trace[1529918640]: "Reflector ListAndWatch" name:storage/cacher.go:/flowschemas (15-Mar-2024 06:27:36.900) (total time: 13103ms):
	Trace[1529918640]: ---"Objects listed" error:etcdserver: request timed out 13103ms (06:27:50.003)
	Trace[1529918640]: [13.10350995s] [13.10350995s] END
	E0315 06:27:50.003539       1 cacher.go:470] cacher (flowschemas.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out; reinitializing...
	W0315 06:27:50.003551       1 reflector.go:535] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	I0315 06:27:50.003567       1 trace.go:236] Trace[1142995160]: "Reflector ListAndWatch" name:storage/cacher.go:/serviceaccounts (15-Mar-2024 06:27:36.904) (total time: 13099ms):
	Trace[1142995160]: ---"Objects listed" error:etcdserver: request timed out 13099ms (06:27:50.003)
	Trace[1142995160]: [13.099504673s] [13.099504673s] END
	E0315 06:27:50.003590       1 cacher.go:470] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd] <==
	I0315 06:34:01.115770       1 serving.go:348] Generated self-signed cert in-memory
	I0315 06:34:02.015947       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 06:34:02.015993       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:34:02.025496       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 06:34:02.025650       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:34:02.026071       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:34:02.026580       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0315 06:34:12.027833       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.78:8443/healthz\": dial tcp 192.168.39.78:8443: connect: connection refused"
	
	
	==> kube-proxy [a72cadbbcef74cbfe983b247bf865ecdf46f6ca8526deeb1657b9f137651e3f2] <==
	E0315 06:26:01.167720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:01.167669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:01.167846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:04.241012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:04.241169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.456306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.456506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:13.457153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:13.457208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:16.532382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:16.532484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:34.959861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:34.959939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:26:38.032844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:26:38.032946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:14.897385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:14.897592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2267": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:17.967871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:17.968263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:21.039682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:21.039794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2435": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:27:54.832570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:27:54.832649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&resourceVersion=2239": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [fd937a8a91decc1f378cd56d4068b095fdceb3673f429ed113e2508c22d3d4a5] <==
	E0315 06:31:42.162745       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:03.665169       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-866665": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:03.665837       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0315 06:32:03.704165       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:32:03.704332       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:32:03.707769       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:32:03.707920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:32:03.709020       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:32:03.709101       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:32:03.711683       1 config.go:188] "Starting service config controller"
	I0315 06:32:03.711754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:32:03.711789       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:32:03.711825       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:32:03.712712       1 config.go:315] "Starting node config controller"
	I0315 06:32:03.712754       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0315 06:32:06.736537       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0315 06:32:06.736941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-866665&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 06:32:06.737612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 06:32:06.737668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 06:32:07.612784       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:32:08.012336       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:32:08.013310       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [53858589abe0969b0d43fd253c11bdba2cc0c2966139f79c07bbf78cffdcd869] <==
	E0315 06:35:55.028769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.78:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:01.273368       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:01.273446       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.78:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:03.659692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:03.659843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:05.741166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:05.741310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:08.417492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:08.417550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.78:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:09.312325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:09.312406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.78:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:11.510887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:11.510978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:14.066727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.78:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:14.066760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.78:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:17.837850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:17.837919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:22.527855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:22.527921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.78:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:24.483922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:24.484011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.78:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:24.535426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.78:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:24.535484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.78:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:36:35.087638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:36:35.087710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.78:8443: connect: connection refused
	
	
	==> kube-scheduler [f1c6cdc2511cf629dab56a84f54e1f95515f47a5fe7d810e843ab351d4c4db1c] <==
	W0315 06:27:33.721173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:27:33.721276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:27:34.581491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:34.581544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:36.411769       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:27:36.411881       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:27:36.473470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:27:36.473532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:27:37.175018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 06:27:37.175090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 06:27:38.621446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:27:38.621559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:27:39.985765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:39.985857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:41.948412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:27:41.948471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:27:58.053579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.053849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.78:8443/apis/apps/v1/replicasets?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:27:58.945885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:27:58.945942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.78:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	W0315 06:28:00.884506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	E0315 06:28:00.884572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.78:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=2450": dial tcp 192.168.39.78:8443: connect: connection refused
	I0315 06:28:00.994442       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:28:00.994493       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:28:00.994655       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 06:36:05 ha-866665 kubelet[1369]: E0315 06:36:05.567768    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:36:05 ha-866665 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:36:05 ha-866665 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:36:05 ha-866665 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:36:05 ha-866665 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:36:07 ha-866665 kubelet[1369]: I0315 06:36:07.544184    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:36:07 ha-866665 kubelet[1369]: E0315 06:36:07.544734    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:36:12 ha-866665 kubelet[1369]: I0315 06:36:12.544203    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:36:12 ha-866665 kubelet[1369]: E0315 06:36:12.544517    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b11128b3-f84e-4526-992d-56e278c3f7c9)\"" pod="kube-system/storage-provisioner" podUID="b11128b3-f84e-4526-992d-56e278c3f7c9"
	Mar 15 06:36:15 ha-866665 kubelet[1369]: I0315 06:36:15.546089    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:36:15 ha-866665 kubelet[1369]: E0315 06:36:15.548326    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	Mar 15 06:36:16 ha-866665 kubelet[1369]: E0315 06:36:16.554479    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:36:16 ha-866665 kubelet[1369]: E0315 06:36:16.554713    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:36:16 ha-866665 kubelet[1369]: E0315 06:36:16.554823    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:36:16 ha-866665 kubelet[1369]: E0315 06:36:16.555146    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:36:21 ha-866665 kubelet[1369]: I0315 06:36:21.544840    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	Mar 15 06:36:21 ha-866665 kubelet[1369]: E0315 06:36:21.546977    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-9nvvx_kube-system(4c5333df-bb98-4f27-9197-875a160f4ff6)\"" pod="kube-system/kindnet-9nvvx" podUID="4c5333df-bb98-4f27-9197-875a160f4ff6"
	Mar 15 06:36:25 ha-866665 kubelet[1369]: I0315 06:36:25.544157    1369 scope.go:117] "RemoveContainer" containerID="f31b9e9704e22843a1d46c9994b8fad60de501458b0915ca32d41c10c0f1bde4"
	Mar 15 06:36:29 ha-866665 kubelet[1369]: I0315 06:36:29.544337    1369 scope.go:117] "RemoveContainer" containerID="8f4c9a0644c94c4301a7d96a179f3c1225d5f24d9c926c09380f1e111322dfdd"
	Mar 15 06:36:29 ha-866665 kubelet[1369]: E0315 06:36:29.545097    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-866665_kube-system(c0c36bd8aefb52162dc1bcef34f4bb25)\"" pod="kube-system/kube-controller-manager-ha-866665" podUID="c0c36bd8aefb52162dc1bcef34f4bb25"
	Mar 15 06:36:31 ha-866665 kubelet[1369]: E0315 06:36:31.555210    1369 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists"
	Mar 15 06:36:31 ha-866665 kubelet[1369]: E0315 06:36:31.555301    1369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:36:31 ha-866665 kubelet[1369]: E0315 06:36:31.555319    1369 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\" already exists" pod="kube-system/kube-apiserver-ha-866665"
	Mar 15 06:36:31 ha-866665 kubelet[1369]: E0315 06:36:31.555370    1369 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ha-866665_kube-system(ec32969267e5d443d53332f70d668161)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-apiserver-ha-866665_kube-system_ec32969267e5d443d53332f70d668161_2\\\" already exists\"" pod="kube-system/kube-apiserver-ha-866665" podUID="ec32969267e5d443d53332f70d668161"
	Mar 15 06:36:36 ha-866665 kubelet[1369]: I0315 06:36:36.544344    1369 scope.go:117] "RemoveContainer" containerID="10518fb395ccef0d12367b01f3a67613f203a91a536807a5036c49103719fc54"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:36:36.168325   35316 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
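Note on the `bufio.Scanner: token too long` error in the stderr block above: this is Go's bufio.Scanner hitting its default 64 KiB per-token limit (bufio.MaxScanTokenSize) while reading lastStart.txt, so the file could not be echoed into the report. A minimal sketch of reading such a file with an enlarged scanner buffer, as a hypothetical standalone reader and not the minikube logs.go implementation:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Stand-in path; the report references .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap on a single token is bufio.MaxScanTokenSize (64 KiB);
		// any longer line yields "bufio.Scanner: token too long".
		// Raising the limit lets very long log lines scan cleanly.
		sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process one log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}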
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-866665 -n ha-866665: exit status 2 (257.56239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-866665" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (58.36s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (306.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-763469
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-763469
E0315 06:44:58.534740   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-763469: exit status 82 (2m2.023745313s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-763469-m03"  ...
	* Stopping node "multinode-763469-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-763469" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-763469 --wait=true -v=8 --alsologtostderr
E0315 06:47:24.121178   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:49:21.071539   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-763469 --wait=true -v=8 --alsologtostderr: (3m2.203008015s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-763469
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-763469 -n multinode-763469
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-763469 logs -n 25: (1.710504712s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469:/home/docker/cp-test_multinode-763469-m02_multinode-763469.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469 sudo cat                                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m02_multinode-763469.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03:/home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469-m03 sudo cat                                   | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp testdata/cp-test.txt                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469:/home/docker/cp-test_multinode-763469-m03_multinode-763469.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469 sudo cat                                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02:/home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469-m02 sudo cat                                   | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-763469 node stop m03                                                          | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	| node    | multinode-763469 node start                                                             | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| stop    | -p multinode-763469                                                                     | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| start   | -p multinode-763469                                                                     | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:46 UTC | 15 Mar 24 06:49 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:46:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:46:39.708445   41675 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:46:39.708763   41675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:46:39.708778   41675 out.go:304] Setting ErrFile to fd 2...
	I0315 06:46:39.708785   41675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:46:39.709289   41675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:46:39.710238   41675 out.go:298] Setting JSON to false
	I0315 06:46:39.711206   41675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5296,"bootTime":1710479904,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:46:39.711271   41675 start.go:139] virtualization: kvm guest
	I0315 06:46:39.713484   41675 out.go:177] * [multinode-763469] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:46:39.715298   41675 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:46:39.715305   41675 notify.go:220] Checking for updates...
	I0315 06:46:39.717030   41675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:46:39.718720   41675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:46:39.720226   41675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:46:39.721746   41675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:46:39.723228   41675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:46:39.725137   41675 config.go:182] Loaded profile config "multinode-763469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:46:39.725239   41675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:46:39.725609   41675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:46:39.725652   41675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:46:39.740916   41675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0315 06:46:39.741346   41675 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:46:39.741911   41675 main.go:141] libmachine: Using API Version  1
	I0315 06:46:39.741931   41675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:46:39.742267   41675 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:46:39.742432   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.776960   41675 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:46:39.778333   41675 start.go:297] selected driver: kvm2
	I0315 06:46:39.778358   41675 start.go:901] validating driver "kvm2" against &{Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:46:39.778487   41675 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:46:39.778805   41675 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:46:39.778886   41675 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:46:39.793784   41675 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:46:39.794415   41675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:46:39.794473   41675 cni.go:84] Creating CNI manager for ""
	I0315 06:46:39.794484   41675 cni.go:136] multinode detected (3 nodes found), recommending kindnet
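	# Editorial sketch, not captured log output: the cni.go lines above show that minikube
	# detected a 3-node profile and therefore recommends kindnet. One hedged way to see which
	# CNI configuration actually lands on the node (binary and profile name reused from this
	# run; the path is the standard CNI config directory, not a minikube-specific location):
	out/minikube-linux-amd64 ssh -p multinode-763469 -- ls /etc/cni/net.d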
	I0315 06:46:39.794551   41675 start.go:340] cluster config:
	{Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:46:39.794667   41675 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:46:39.796479   41675 out.go:177] * Starting "multinode-763469" primary control-plane node in "multinode-763469" cluster
	I0315 06:46:39.798062   41675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:46:39.798118   41675 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:46:39.798128   41675 cache.go:56] Caching tarball of preloaded images
	I0315 06:46:39.798231   41675 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:46:39.798247   41675 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:46:39.798384   41675 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/config.json ...
	I0315 06:46:39.798595   41675 start.go:360] acquireMachinesLock for multinode-763469: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:46:39.798639   41675 start.go:364] duration metric: took 24.438µs to acquireMachinesLock for "multinode-763469"
	I0315 06:46:39.798657   41675 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:46:39.798666   41675 fix.go:54] fixHost starting: 
	I0315 06:46:39.798909   41675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:46:39.798941   41675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:46:39.813233   41675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0315 06:46:39.813646   41675 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:46:39.814074   41675 main.go:141] libmachine: Using API Version  1
	I0315 06:46:39.814105   41675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:46:39.814400   41675 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:46:39.814584   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.814743   41675 main.go:141] libmachine: (multinode-763469) Calling .GetState
	I0315 06:46:39.816338   41675 fix.go:112] recreateIfNeeded on multinode-763469: state=Running err=<nil>
	W0315 06:46:39.816359   41675 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:46:39.818410   41675 out.go:177] * Updating the running kvm2 "multinode-763469" VM ...
	I0315 06:46:39.819881   41675 machine.go:94] provisionDockerMachine start ...
	I0315 06:46:39.819903   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.820136   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:39.822667   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.823175   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:39.823210   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.823370   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:39.823568   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.823771   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.823929   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:39.824089   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:39.824275   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:39.824286   41675 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:46:39.946363   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-763469
	
	I0315 06:46:39.946392   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:39.946648   41675 buildroot.go:166] provisioning hostname "multinode-763469"
	I0315 06:46:39.946679   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:39.946944   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:39.950032   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.950498   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:39.950529   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.950804   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:39.951065   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.951260   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.951405   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:39.951618   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:39.951822   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:39.951837   41675 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-763469 && echo "multinode-763469" | sudo tee /etc/hostname
	I0315 06:46:40.082466   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-763469
	
	I0315 06:46:40.082499   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.085472   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.085849   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.085874   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.086113   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.086335   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.086523   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.086675   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.086852   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:40.087063   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:40.087093   41675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-763469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-763469/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-763469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:46:40.201668   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
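	# Editorial sketch, not captured log output: the SSH commands above set the guest
	# hostname to multinode-763469 and make sure /etc/hosts maps it to 127.0.1.1. A hedged
	# spot-check from the host (binary and profile name reused from this run):
	out/minikube-linux-amd64 ssh -p multinode-763469 -- hostname
	out/minikube-linux-amd64 ssh -p multinode-763469 -- grep multinode-763469 /etc/hosts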
	I0315 06:46:40.201698   41675 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:46:40.201714   41675 buildroot.go:174] setting up certificates
	I0315 06:46:40.201723   41675 provision.go:84] configureAuth start
	I0315 06:46:40.201731   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:40.202041   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:46:40.204613   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.205067   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.205100   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.205231   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.207417   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.207823   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.207870   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.207930   41675 provision.go:143] copyHostCerts
	I0315 06:46:40.207969   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:46:40.207997   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:46:40.208005   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:46:40.208082   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:46:40.208161   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:46:40.208177   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:46:40.208184   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:46:40.208208   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:46:40.208260   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:46:40.208276   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:46:40.208282   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:46:40.208302   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:46:40.208356   41675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.multinode-763469 san=[127.0.0.1 192.168.39.29 localhost minikube multinode-763469]
	I0315 06:46:40.297910   41675 provision.go:177] copyRemoteCerts
	I0315 06:46:40.297968   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:46:40.297995   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.300845   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.301257   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.301301   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.301465   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.301668   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.301819   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.301951   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:46:40.391708   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:46:40.391819   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:46:40.420432   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:46:40.420514   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0315 06:46:40.447700   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:46:40.447777   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:46:40.473416   41675 provision.go:87] duration metric: took 271.680903ms to configureAuth
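	# Editorial sketch, not captured log output: configureAuth above regenerated the server
	# certificate (san=[127.0.0.1 192.168.39.29 localhost minikube multinode-763469]) and
	# copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged
	# way to inspect the deployed certificate; the openssl flags are standard tooling, not a
	# minikube API:
	out/minikube-linux-amd64 ssh -p multinode-763469 -- sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate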
	I0315 06:46:40.473447   41675 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:46:40.473695   41675 config.go:182] Loaded profile config "multinode-763469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:46:40.473763   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.476339   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.476725   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.476767   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.476943   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.477111   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.477277   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.477403   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.477545   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:40.477716   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:40.477730   41675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:48:11.377134   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:48:11.377183   41675 machine.go:97] duration metric: took 1m31.557288028s to provisionDockerMachine
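	# Editorial sketch, not captured log output: nearly all of the 1m31s reported for
	# provisionDockerMachine sits between the "... && sudo systemctl restart crio" command
	# issued at 06:46:40 and the SSH reply at 06:48:11. If that slow restart needs
	# investigating, a hedged starting point (journalctl and systemctl are standard systemd
	# tools, not minikube commands; binary and profile name reused from this run):
	out/minikube-linux-amd64 ssh -p multinode-763469 -- sudo systemctl status crio --no-pager
	out/minikube-linux-amd64 ssh -p multinode-763469 -- sudo journalctl -u crio -n 50 --no-pager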
	I0315 06:48:11.377196   41675 start.go:293] postStartSetup for "multinode-763469" (driver="kvm2")
	I0315 06:48:11.377210   41675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:48:11.377240   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.377687   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:48:11.377722   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.380949   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.381428   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.381452   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.381677   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.381891   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.382065   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.382234   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.473207   41675 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:48:11.477776   41675 command_runner.go:130] > NAME=Buildroot
	I0315 06:48:11.477795   41675 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0315 06:48:11.477799   41675 command_runner.go:130] > ID=buildroot
	I0315 06:48:11.477804   41675 command_runner.go:130] > VERSION_ID=2023.02.9
	I0315 06:48:11.477809   41675 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0315 06:48:11.477836   41675 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:48:11.477851   41675 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:48:11.477909   41675 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:48:11.477984   41675 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:48:11.477994   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:48:11.478076   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:48:11.488055   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:48:11.515998   41675 start.go:296] duration metric: took 138.787884ms for postStartSetup
	I0315 06:48:11.516046   41675 fix.go:56] duration metric: took 1m31.717379198s for fixHost
	I0315 06:48:11.516070   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.519119   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.519626   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.519645   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.519961   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.520221   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.520421   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.520587   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.520754   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:48:11.520966   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:48:11.520978   41675 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:48:11.637686   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710485291.615874397
	
	I0315 06:48:11.637707   41675 fix.go:216] guest clock: 1710485291.615874397
	I0315 06:48:11.637714   41675 fix.go:229] Guest: 2024-03-15 06:48:11.615874397 +0000 UTC Remote: 2024-03-15 06:48:11.516051552 +0000 UTC m=+91.852898782 (delta=99.822845ms)
	I0315 06:48:11.637746   41675 fix.go:200] guest clock delta is within tolerance: 99.822845ms
	I0315 06:48:11.637756   41675 start.go:83] releasing machines lock for "multinode-763469", held for 1m31.839106152s
	I0315 06:48:11.637779   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.638041   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:48:11.640800   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.641275   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.641299   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.641470   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.641976   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.642149   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.642268   41675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:48:11.642312   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.642361   41675 ssh_runner.go:195] Run: cat /version.json
	I0315 06:48:11.642384   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.644932   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645198   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645320   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.645356   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645453   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.645572   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.645608   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645622   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.645756   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.645812   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.645959   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.645974   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.646102   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.646239   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.726123   41675 command_runner.go:130] > {"iso_version": "v1.32.1-1710459732-18213", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3cbf09d91ff419d65a5234008c34d4cc95dfc38f"}
	I0315 06:48:11.726470   41675 ssh_runner.go:195] Run: systemctl --version
	I0315 06:48:11.762745   41675 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0315 06:48:11.762795   41675 command_runner.go:130] > systemd 252 (252)
	I0315 06:48:11.762818   41675 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0315 06:48:11.762867   41675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:48:11.925678   41675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 06:48:11.933138   41675 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0315 06:48:11.933197   41675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:48:11.933252   41675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:48:11.943591   41675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:48:11.943616   41675 start.go:494] detecting cgroup driver to use...
	I0315 06:48:11.943729   41675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:48:11.961161   41675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:48:11.976676   41675 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:48:11.976729   41675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:48:11.991562   41675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:48:12.006903   41675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:48:12.162279   41675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:48:12.311667   41675 docker.go:233] disabling docker service ...
	I0315 06:48:12.311725   41675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:48:12.328588   41675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:48:12.344272   41675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:48:12.494952   41675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:48:12.639871   41675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
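	# Editorial sketch, not captured log output: the Run lines above are the
	# cri-docker/docker teardown minikube performs before handing the node to cri-o.
	# Collapsed into the equivalent shell sequence (commands copied verbatim from this log;
	# they would be run on the guest, e.g. via minikube ssh):
	sudo systemctl stop -f cri-docker.socket
	sudo systemctl stop -f cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service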
	I0315 06:48:12.654908   41675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:48:12.676978   41675 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0315 06:48:12.677585   41675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:48:12.677672   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.689216   41675 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:48:12.689292   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.700695   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.712763   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.724331   41675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:48:12.736300   41675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:48:12.748623   41675 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0315 06:48:12.748743   41675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:48:12.760212   41675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:48:12.906197   41675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:48:14.060527   41675 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.154291615s)
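	# Editorial sketch, not captured log output: the tee/sed commands above leave the
	# runtime configured roughly as follows (keys copied from the commands in this log;
	# exact file contents are otherwise assumed):
	#   /etc/crictl.yaml                   -> runtime-endpoint: unix:///var/run/crio/crio.sock
	#   /etc/crio/crio.conf.d/02-crio.conf -> pause_image = "registry.k8s.io/pause:3.9"
	#                                         cgroup_manager = "cgroupfs"
	#                                         conmon_cgroup = "pod"
	# A hedged spot-check after the crio restart above (crictl is standard CRI tooling):
	out/minikube-linux-amd64 ssh -p multinode-763469 -- sudo crictl info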
	I0315 06:48:14.060558   41675 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:48:14.060601   41675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:48:14.065804   41675 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0315 06:48:14.065842   41675 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0315 06:48:14.065849   41675 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0315 06:48:14.065855   41675 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 06:48:14.065860   41675 command_runner.go:130] > Access: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065867   41675 command_runner.go:130] > Modify: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065872   41675 command_runner.go:130] > Change: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065876   41675 command_runner.go:130] >  Birth: -
	I0315 06:48:14.065948   41675 start.go:562] Will wait 60s for crictl version
	I0315 06:48:14.065988   41675 ssh_runner.go:195] Run: which crictl
	I0315 06:48:14.069735   41675 command_runner.go:130] > /usr/bin/crictl
	I0315 06:48:14.069950   41675 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:48:14.114689   41675 command_runner.go:130] > Version:  0.1.0
	I0315 06:48:14.114711   41675 command_runner.go:130] > RuntimeName:  cri-o
	I0315 06:48:14.114716   41675 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0315 06:48:14.114721   41675 command_runner.go:130] > RuntimeApiVersion:  v1
	I0315 06:48:14.114740   41675 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
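	(Editor's note: the crictl probe above picks up its endpoint from the /etc/crictl.yaml written earlier. A minimal sketch of reproducing the same check by hand on the guest, passing the endpoint explicitly instead of relying on crictl.yaml; assumes root access on the node.)

	    sudo /usr/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo /usr/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock info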
	I0315 06:48:14.114806   41675 ssh_runner.go:195] Run: crio --version
	I0315 06:48:14.145548   41675 command_runner.go:130] > crio version 1.29.1
	I0315 06:48:14.145573   41675 command_runner.go:130] > Version:        1.29.1
	I0315 06:48:14.145581   41675 command_runner.go:130] > GitCommit:      unknown
	I0315 06:48:14.145588   41675 command_runner.go:130] > GitCommitDate:  unknown
	I0315 06:48:14.145594   41675 command_runner.go:130] > GitTreeState:   clean
	I0315 06:48:14.145606   41675 command_runner.go:130] > BuildDate:      2024-03-15T05:02:11Z
	I0315 06:48:14.145610   41675 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 06:48:14.145614   41675 command_runner.go:130] > Compiler:       gc
	I0315 06:48:14.145619   41675 command_runner.go:130] > Platform:       linux/amd64
	I0315 06:48:14.145624   41675 command_runner.go:130] > Linkmode:       dynamic
	I0315 06:48:14.145628   41675 command_runner.go:130] > BuildTags:      
	I0315 06:48:14.145633   41675 command_runner.go:130] >   containers_image_ostree_stub
	I0315 06:48:14.145638   41675 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 06:48:14.145648   41675 command_runner.go:130] >   btrfs_noversion
	I0315 06:48:14.145656   41675 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 06:48:14.145661   41675 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 06:48:14.145666   41675 command_runner.go:130] >   seccomp
	I0315 06:48:14.145670   41675 command_runner.go:130] > LDFlags:          unknown
	I0315 06:48:14.145676   41675 command_runner.go:130] > SeccompEnabled:   true
	I0315 06:48:14.145680   41675 command_runner.go:130] > AppArmorEnabled:  false
	I0315 06:48:14.145748   41675 ssh_runner.go:195] Run: crio --version
	I0315 06:48:14.175809   41675 command_runner.go:130] > crio version 1.29.1
	I0315 06:48:14.175831   41675 command_runner.go:130] > Version:        1.29.1
	I0315 06:48:14.175836   41675 command_runner.go:130] > GitCommit:      unknown
	I0315 06:48:14.175840   41675 command_runner.go:130] > GitCommitDate:  unknown
	I0315 06:48:14.175844   41675 command_runner.go:130] > GitTreeState:   clean
	I0315 06:48:14.175861   41675 command_runner.go:130] > BuildDate:      2024-03-15T05:02:11Z
	I0315 06:48:14.175865   41675 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 06:48:14.175869   41675 command_runner.go:130] > Compiler:       gc
	I0315 06:48:14.175873   41675 command_runner.go:130] > Platform:       linux/amd64
	I0315 06:48:14.175877   41675 command_runner.go:130] > Linkmode:       dynamic
	I0315 06:48:14.175881   41675 command_runner.go:130] > BuildTags:      
	I0315 06:48:14.175885   41675 command_runner.go:130] >   containers_image_ostree_stub
	I0315 06:48:14.175889   41675 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 06:48:14.175893   41675 command_runner.go:130] >   btrfs_noversion
	I0315 06:48:14.175897   41675 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 06:48:14.175902   41675 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 06:48:14.175906   41675 command_runner.go:130] >   seccomp
	I0315 06:48:14.175910   41675 command_runner.go:130] > LDFlags:          unknown
	I0315 06:48:14.175913   41675 command_runner.go:130] > SeccompEnabled:   true
	I0315 06:48:14.175917   41675 command_runner.go:130] > AppArmorEnabled:  false
	I0315 06:48:14.179096   41675 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:48:14.180333   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:48:14.182765   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:14.183168   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:14.183198   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:14.183401   41675 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:48:14.187765   41675 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0315 06:48:14.187855   41675 kubeadm.go:877] updating cluster {Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
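	(Editor's note: the ClusterConfig dumped above is minikube's stored profile for multinode-763469, with one control-plane node at 192.168.39.29 and two workers. A sketch, assuming the default profile layout under $HOME/.minikube and that jq is available on the host (neither is shown in this log), of reading the same node list back from disk.)

	    jq '.Nodes[] | {Name, IP, KubernetesVersion, ControlPlane}' \
	      "$HOME/.minikube/profiles/multinode-763469/config.json"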
	I0315 06:48:14.188019   41675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:48:14.188076   41675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:48:14.233784   41675 command_runner.go:130] > {
	I0315 06:48:14.233824   41675 command_runner.go:130] >   "images": [
	I0315 06:48:14.233831   41675 command_runner.go:130] >     {
	I0315 06:48:14.233842   41675 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 06:48:14.233856   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.233865   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 06:48:14.233875   41675 command_runner.go:130] >       ],
	I0315 06:48:14.233882   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.233895   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 06:48:14.233909   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 06:48:14.233915   41675 command_runner.go:130] >       ],
	I0315 06:48:14.233925   41675 command_runner.go:130] >       "size": "65258016",
	I0315 06:48:14.233931   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.233941   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.233948   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.233954   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.233960   41675 command_runner.go:130] >     },
	I0315 06:48:14.233965   41675 command_runner.go:130] >     {
	I0315 06:48:14.233974   41675 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 06:48:14.233982   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.233990   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 06:48:14.234003   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234010   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234020   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 06:48:14.234031   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 06:48:14.234034   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234041   41675 command_runner.go:130] >       "size": "65291810",
	I0315 06:48:14.234046   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234055   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234059   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234063   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234069   41675 command_runner.go:130] >     },
	I0315 06:48:14.234072   41675 command_runner.go:130] >     {
	I0315 06:48:14.234078   41675 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 06:48:14.234083   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234089   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 06:48:14.234095   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234099   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234115   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 06:48:14.234125   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 06:48:14.234129   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234133   41675 command_runner.go:130] >       "size": "1363676",
	I0315 06:48:14.234137   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234144   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234148   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234152   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234156   41675 command_runner.go:130] >     },
	I0315 06:48:14.234161   41675 command_runner.go:130] >     {
	I0315 06:48:14.234167   41675 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 06:48:14.234171   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234177   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 06:48:14.234189   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234198   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234212   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 06:48:14.234240   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 06:48:14.234249   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234256   41675 command_runner.go:130] >       "size": "31470524",
	I0315 06:48:14.234273   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234283   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234289   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234294   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234298   41675 command_runner.go:130] >     },
	I0315 06:48:14.234304   41675 command_runner.go:130] >     {
	I0315 06:48:14.234310   41675 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 06:48:14.234314   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234320   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 06:48:14.234325   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234329   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234337   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 06:48:14.234346   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 06:48:14.234350   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234354   41675 command_runner.go:130] >       "size": "53621675",
	I0315 06:48:14.234360   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234363   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234367   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234373   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234377   41675 command_runner.go:130] >     },
	I0315 06:48:14.234380   41675 command_runner.go:130] >     {
	I0315 06:48:14.234386   41675 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 06:48:14.234392   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234396   41675 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 06:48:14.234399   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234403   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234410   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 06:48:14.234419   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 06:48:14.234425   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234430   41675 command_runner.go:130] >       "size": "295456551",
	I0315 06:48:14.234436   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234440   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234443   41675 command_runner.go:130] >       },
	I0315 06:48:14.234450   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234453   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234460   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234468   41675 command_runner.go:130] >     },
	I0315 06:48:14.234473   41675 command_runner.go:130] >     {
	I0315 06:48:14.234479   41675 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 06:48:14.234483   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234488   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 06:48:14.234494   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234497   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234504   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 06:48:14.234513   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 06:48:14.234517   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234521   41675 command_runner.go:130] >       "size": "127226832",
	I0315 06:48:14.234526   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234530   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234536   41675 command_runner.go:130] >       },
	I0315 06:48:14.234540   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234544   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234551   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234554   41675 command_runner.go:130] >     },
	I0315 06:48:14.234557   41675 command_runner.go:130] >     {
	I0315 06:48:14.234563   41675 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 06:48:14.234569   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234575   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 06:48:14.234580   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234584   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234605   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 06:48:14.234616   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 06:48:14.234620   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234624   41675 command_runner.go:130] >       "size": "123261750",
	I0315 06:48:14.234627   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234631   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234635   41675 command_runner.go:130] >       },
	I0315 06:48:14.234639   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234645   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234649   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234653   41675 command_runner.go:130] >     },
	I0315 06:48:14.234656   41675 command_runner.go:130] >     {
	I0315 06:48:14.234668   41675 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 06:48:14.234674   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234679   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 06:48:14.234685   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234689   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234696   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 06:48:14.234703   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 06:48:14.234706   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234710   41675 command_runner.go:130] >       "size": "74749335",
	I0315 06:48:14.234713   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234717   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234720   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234723   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234726   41675 command_runner.go:130] >     },
	I0315 06:48:14.234729   41675 command_runner.go:130] >     {
	I0315 06:48:14.234735   41675 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 06:48:14.234739   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234743   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 06:48:14.234747   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234751   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234760   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 06:48:14.234769   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 06:48:14.234773   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234777   41675 command_runner.go:130] >       "size": "61551410",
	I0315 06:48:14.234781   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234784   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234787   41675 command_runner.go:130] >       },
	I0315 06:48:14.234791   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234795   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234801   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234805   41675 command_runner.go:130] >     },
	I0315 06:48:14.234808   41675 command_runner.go:130] >     {
	I0315 06:48:14.234814   41675 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 06:48:14.234818   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234823   41675 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 06:48:14.234827   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234835   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234845   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 06:48:14.234852   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 06:48:14.234857   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234861   41675 command_runner.go:130] >       "size": "750414",
	I0315 06:48:14.234867   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234871   41675 command_runner.go:130] >         "value": "65535"
	I0315 06:48:14.234874   41675 command_runner.go:130] >       },
	I0315 06:48:14.234878   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234882   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234886   41675 command_runner.go:130] >       "pinned": true
	I0315 06:48:14.234889   41675 command_runner.go:130] >     }
	I0315 06:48:14.234892   41675 command_runner.go:130] >   ]
	I0315 06:48:14.234895   41675 command_runner.go:130] > }
	I0315 06:48:14.235053   41675 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:48:14.235064   41675 crio.go:415] Images already preloaded, skipping extraction
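	(Editor's note: because every image in the JSON above matches what the v1.28.4 crio preload expects, minikube skips extraction here. A small shell sketch for summarizing that same output on the node, assuming jq is installed there, which this log does not show.)

	    sudo crictl images --output json \
	      | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'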
	I0315 06:48:14.235117   41675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:48:14.278682   41675 command_runner.go:130] > {
	I0315 06:48:14.278706   41675 command_runner.go:130] >   "images": [
	I0315 06:48:14.278712   41675 command_runner.go:130] >     {
	I0315 06:48:14.278723   41675 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 06:48:14.278731   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278746   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 06:48:14.278751   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278759   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.278771   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 06:48:14.278785   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 06:48:14.278794   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278803   41675 command_runner.go:130] >       "size": "65258016",
	I0315 06:48:14.278810   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.278819   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.278835   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.278845   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.278849   41675 command_runner.go:130] >     },
	I0315 06:48:14.278858   41675 command_runner.go:130] >     {
	I0315 06:48:14.278876   41675 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 06:48:14.278886   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278894   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 06:48:14.278902   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278912   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.278925   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 06:48:14.278937   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 06:48:14.278946   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278952   41675 command_runner.go:130] >       "size": "65291810",
	I0315 06:48:14.278959   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.278967   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.278971   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.278975   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.278978   41675 command_runner.go:130] >     },
	I0315 06:48:14.278981   41675 command_runner.go:130] >     {
	I0315 06:48:14.278987   41675 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 06:48:14.278991   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278996   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 06:48:14.279003   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279007   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279016   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 06:48:14.279025   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 06:48:14.279031   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279035   41675 command_runner.go:130] >       "size": "1363676",
	I0315 06:48:14.279042   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279046   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279054   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279063   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279070   41675 command_runner.go:130] >     },
	I0315 06:48:14.279073   41675 command_runner.go:130] >     {
	I0315 06:48:14.279078   41675 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 06:48:14.279084   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279089   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 06:48:14.279093   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279097   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279104   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 06:48:14.279135   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 06:48:14.279141   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279145   41675 command_runner.go:130] >       "size": "31470524",
	I0315 06:48:14.279150   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279157   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279164   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279168   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279171   41675 command_runner.go:130] >     },
	I0315 06:48:14.279174   41675 command_runner.go:130] >     {
	I0315 06:48:14.279182   41675 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 06:48:14.279191   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279198   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 06:48:14.279208   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279214   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279230   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 06:48:14.279245   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 06:48:14.279254   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279260   41675 command_runner.go:130] >       "size": "53621675",
	I0315 06:48:14.279268   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279275   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279281   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279285   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279288   41675 command_runner.go:130] >     },
	I0315 06:48:14.279292   41675 command_runner.go:130] >     {
	I0315 06:48:14.279298   41675 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 06:48:14.279302   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279310   41675 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 06:48:14.279316   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279320   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279328   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 06:48:14.279337   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 06:48:14.279340   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279344   41675 command_runner.go:130] >       "size": "295456551",
	I0315 06:48:14.279348   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279352   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279358   41675 command_runner.go:130] >       },
	I0315 06:48:14.279366   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279370   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279376   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279380   41675 command_runner.go:130] >     },
	I0315 06:48:14.279383   41675 command_runner.go:130] >     {
	I0315 06:48:14.279389   41675 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 06:48:14.279395   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279400   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 06:48:14.279406   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279410   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279419   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 06:48:14.279429   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 06:48:14.279435   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279439   41675 command_runner.go:130] >       "size": "127226832",
	I0315 06:48:14.279442   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279446   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279452   41675 command_runner.go:130] >       },
	I0315 06:48:14.279456   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279460   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279466   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279470   41675 command_runner.go:130] >     },
	I0315 06:48:14.279473   41675 command_runner.go:130] >     {
	I0315 06:48:14.279479   41675 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 06:48:14.279485   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279490   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 06:48:14.279494   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279498   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279515   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 06:48:14.279525   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 06:48:14.279528   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279532   41675 command_runner.go:130] >       "size": "123261750",
	I0315 06:48:14.279536   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279540   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279543   41675 command_runner.go:130] >       },
	I0315 06:48:14.279547   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279551   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279562   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279568   41675 command_runner.go:130] >     },
	I0315 06:48:14.279570   41675 command_runner.go:130] >     {
	I0315 06:48:14.279580   41675 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 06:48:14.279586   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279591   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 06:48:14.279597   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279601   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279609   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 06:48:14.279618   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 06:48:14.279624   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279629   41675 command_runner.go:130] >       "size": "74749335",
	I0315 06:48:14.279633   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279640   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279643   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279647   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279652   41675 command_runner.go:130] >     },
	I0315 06:48:14.279656   41675 command_runner.go:130] >     {
	I0315 06:48:14.279664   41675 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 06:48:14.279668   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279675   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 06:48:14.279678   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279682   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279689   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 06:48:14.279699   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 06:48:14.279705   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279708   41675 command_runner.go:130] >       "size": "61551410",
	I0315 06:48:14.279712   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279716   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279722   41675 command_runner.go:130] >       },
	I0315 06:48:14.279726   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279732   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279735   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279738   41675 command_runner.go:130] >     },
	I0315 06:48:14.279744   41675 command_runner.go:130] >     {
	I0315 06:48:14.279752   41675 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 06:48:14.279760   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279767   41675 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 06:48:14.279771   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279777   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279783   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 06:48:14.279792   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 06:48:14.279796   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279800   41675 command_runner.go:130] >       "size": "750414",
	I0315 06:48:14.279803   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279807   41675 command_runner.go:130] >         "value": "65535"
	I0315 06:48:14.279813   41675 command_runner.go:130] >       },
	I0315 06:48:14.279817   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279822   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279826   41675 command_runner.go:130] >       "pinned": true
	I0315 06:48:14.279832   41675 command_runner.go:130] >     }
	I0315 06:48:14.279835   41675 command_runner.go:130] >   ]
	I0315 06:48:14.279838   41675 command_runner.go:130] > }
	I0315 06:48:14.279984   41675 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:48:14.279999   41675 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:48:14.280005   41675 kubeadm.go:928] updating node { 192.168.39.29 8443 v1.28.4 crio true true} ...
	I0315 06:48:14.280095   41675 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-763469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
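	(Editor's note: the [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders for this node; its --hostname-override and --node-ip flags match the first Nodes entry in the ClusterConfig. To inspect the unit that actually landed on the guest, something along these lines could be run from the host; the `--` command form of `minikube ssh` is assumed here.)

	    minikube -p multinode-763469 ssh -- 'systemctl cat kubelet; systemctl is-active kubelet'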
	I0315 06:48:14.280163   41675 ssh_runner.go:195] Run: crio config
	I0315 06:48:14.324997   41675 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0315 06:48:14.325028   41675 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0315 06:48:14.325039   41675 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0315 06:48:14.325045   41675 command_runner.go:130] > #
	I0315 06:48:14.325060   41675 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0315 06:48:14.325068   41675 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0315 06:48:14.325077   41675 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0315 06:48:14.325088   41675 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0315 06:48:14.325095   41675 command_runner.go:130] > # reload'.
	I0315 06:48:14.325104   41675 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0315 06:48:14.325116   41675 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0315 06:48:14.325128   41675 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0315 06:48:14.325137   41675 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0315 06:48:14.325157   41675 command_runner.go:130] > [crio]
	I0315 06:48:14.325166   41675 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0315 06:48:14.325178   41675 command_runner.go:130] > # containers images, in this directory.
	I0315 06:48:14.325185   41675 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0315 06:48:14.325199   41675 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0315 06:48:14.325209   41675 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0315 06:48:14.325220   41675 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0315 06:48:14.325236   41675 command_runner.go:130] > # imagestore = ""
	I0315 06:48:14.325248   41675 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0315 06:48:14.325261   41675 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0315 06:48:14.325274   41675 command_runner.go:130] > storage_driver = "overlay"
	I0315 06:48:14.325288   41675 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0315 06:48:14.325300   41675 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0315 06:48:14.325310   41675 command_runner.go:130] > storage_option = [
	I0315 06:48:14.325320   41675 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0315 06:48:14.325328   41675 command_runner.go:130] > ]
	I0315 06:48:14.325340   41675 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0315 06:48:14.325356   41675 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0315 06:48:14.325367   41675 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0315 06:48:14.325375   41675 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0315 06:48:14.325387   41675 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0315 06:48:14.325393   41675 command_runner.go:130] > # always happen on a node reboot
	I0315 06:48:14.325404   41675 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0315 06:48:14.325425   41675 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0315 06:48:14.325440   41675 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0315 06:48:14.325448   41675 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0315 06:48:14.325459   41675 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0315 06:48:14.325470   41675 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0315 06:48:14.325485   41675 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0315 06:48:14.325496   41675 command_runner.go:130] > # internal_wipe = true
	I0315 06:48:14.325507   41675 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0315 06:48:14.325519   41675 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0315 06:48:14.325531   41675 command_runner.go:130] > # internal_repair = false
	I0315 06:48:14.325543   41675 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0315 06:48:14.325557   41675 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0315 06:48:14.325566   41675 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0315 06:48:14.325586   41675 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0315 06:48:14.325599   41675 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0315 06:48:14.325608   41675 command_runner.go:130] > [crio.api]
	I0315 06:48:14.325618   41675 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0315 06:48:14.325628   41675 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0315 06:48:14.325636   41675 command_runner.go:130] > # IP address on which the stream server will listen.
	I0315 06:48:14.325645   41675 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0315 06:48:14.325655   41675 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0315 06:48:14.325665   41675 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0315 06:48:14.325671   41675 command_runner.go:130] > # stream_port = "0"
	I0315 06:48:14.325681   41675 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0315 06:48:14.325690   41675 command_runner.go:130] > # stream_enable_tls = false
	I0315 06:48:14.325699   41675 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0315 06:48:14.325709   41675 command_runner.go:130] > # stream_idle_timeout = ""
	I0315 06:48:14.325718   41675 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0315 06:48:14.325729   41675 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0315 06:48:14.325738   41675 command_runner.go:130] > # minutes.
	I0315 06:48:14.325744   41675 command_runner.go:130] > # stream_tls_cert = ""
	I0315 06:48:14.325761   41675 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0315 06:48:14.325772   41675 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0315 06:48:14.325779   41675 command_runner.go:130] > # stream_tls_key = ""
	I0315 06:48:14.325789   41675 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0315 06:48:14.325801   41675 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0315 06:48:14.325837   41675 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0315 06:48:14.325848   41675 command_runner.go:130] > # stream_tls_ca = ""
	I0315 06:48:14.325857   41675 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 06:48:14.325863   41675 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0315 06:48:14.325872   41675 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 06:48:14.325882   41675 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0315 06:48:14.325891   41675 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0315 06:48:14.325902   41675 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0315 06:48:14.325911   41675 command_runner.go:130] > [crio.runtime]
	I0315 06:48:14.325921   41675 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0315 06:48:14.325942   41675 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0315 06:48:14.325952   41675 command_runner.go:130] > # "nofile=1024:2048"
	I0315 06:48:14.325960   41675 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0315 06:48:14.325975   41675 command_runner.go:130] > # default_ulimits = [
	I0315 06:48:14.325982   41675 command_runner.go:130] > # ]
	I0315 06:48:14.325993   41675 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0315 06:48:14.326004   41675 command_runner.go:130] > # no_pivot = false
	I0315 06:48:14.326015   41675 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0315 06:48:14.326024   41675 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0315 06:48:14.326035   41675 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0315 06:48:14.326043   41675 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0315 06:48:14.326062   41675 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0315 06:48:14.326076   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 06:48:14.326086   41675 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0315 06:48:14.326098   41675 command_runner.go:130] > # Cgroup setting for conmon
	I0315 06:48:14.326111   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0315 06:48:14.326117   41675 command_runner.go:130] > conmon_cgroup = "pod"
	I0315 06:48:14.326129   41675 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0315 06:48:14.326138   41675 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0315 06:48:14.326150   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 06:48:14.326160   41675 command_runner.go:130] > conmon_env = [
	I0315 06:48:14.326171   41675 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 06:48:14.326179   41675 command_runner.go:130] > ]
	I0315 06:48:14.326189   41675 command_runner.go:130] > # Additional environment variables to set for all the
	I0315 06:48:14.326199   41675 command_runner.go:130] > # containers. These are overridden if set in the
	I0315 06:48:14.326208   41675 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0315 06:48:14.326217   41675 command_runner.go:130] > # default_env = [
	I0315 06:48:14.326222   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326239   41675 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0315 06:48:14.326252   41675 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0315 06:48:14.326260   41675 command_runner.go:130] > # selinux = false
	I0315 06:48:14.326269   41675 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0315 06:48:14.326279   41675 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0315 06:48:14.326294   41675 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0315 06:48:14.326302   41675 command_runner.go:130] > # seccomp_profile = ""
	I0315 06:48:14.326310   41675 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0315 06:48:14.326320   41675 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0315 06:48:14.326330   41675 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0315 06:48:14.326340   41675 command_runner.go:130] > # which might increase security.
	I0315 06:48:14.326358   41675 command_runner.go:130] > # This option is currently deprecated,
	I0315 06:48:14.326370   41675 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0315 06:48:14.326377   41675 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0315 06:48:14.326390   41675 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0315 06:48:14.326399   41675 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0315 06:48:14.326412   41675 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0315 06:48:14.326424   41675 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0315 06:48:14.326432   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.326444   41675 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0315 06:48:14.326457   41675 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0315 06:48:14.326467   41675 command_runner.go:130] > # the cgroup blockio controller.
	I0315 06:48:14.326474   41675 command_runner.go:130] > # blockio_config_file = ""
	I0315 06:48:14.326484   41675 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0315 06:48:14.326493   41675 command_runner.go:130] > # blockio parameters.
	I0315 06:48:14.326499   41675 command_runner.go:130] > # blockio_reload = false
	I0315 06:48:14.326512   41675 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0315 06:48:14.326520   41675 command_runner.go:130] > # irqbalance daemon.
	I0315 06:48:14.326528   41675 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0315 06:48:14.326540   41675 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0315 06:48:14.326553   41675 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0315 06:48:14.326566   41675 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0315 06:48:14.326584   41675 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0315 06:48:14.326597   41675 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0315 06:48:14.326607   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.326616   41675 command_runner.go:130] > # rdt_config_file = ""
	I0315 06:48:14.326625   41675 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0315 06:48:14.326634   41675 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0315 06:48:14.326677   41675 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0315 06:48:14.326687   41675 command_runner.go:130] > # separate_pull_cgroup = ""
	I0315 06:48:14.326696   41675 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0315 06:48:14.326707   41675 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0315 06:48:14.326713   41675 command_runner.go:130] > # will be added.
	I0315 06:48:14.326720   41675 command_runner.go:130] > # default_capabilities = [
	I0315 06:48:14.326725   41675 command_runner.go:130] > # 	"CHOWN",
	I0315 06:48:14.326733   41675 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0315 06:48:14.326742   41675 command_runner.go:130] > # 	"FSETID",
	I0315 06:48:14.326755   41675 command_runner.go:130] > # 	"FOWNER",
	I0315 06:48:14.326764   41675 command_runner.go:130] > # 	"SETGID",
	I0315 06:48:14.326769   41675 command_runner.go:130] > # 	"SETUID",
	I0315 06:48:14.326778   41675 command_runner.go:130] > # 	"SETPCAP",
	I0315 06:48:14.326784   41675 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0315 06:48:14.326793   41675 command_runner.go:130] > # 	"KILL",
	I0315 06:48:14.326797   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326808   41675 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0315 06:48:14.326817   41675 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0315 06:48:14.326828   41675 command_runner.go:130] > # add_inheritable_capabilities = false
	I0315 06:48:14.326841   41675 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0315 06:48:14.326852   41675 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 06:48:14.326859   41675 command_runner.go:130] > # default_sysctls = [
	I0315 06:48:14.326867   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326874   41675 command_runner.go:130] > # List of devices on the host that a
	I0315 06:48:14.326887   41675 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0315 06:48:14.326897   41675 command_runner.go:130] > # allowed_devices = [
	I0315 06:48:14.326903   41675 command_runner.go:130] > # 	"/dev/fuse",
	I0315 06:48:14.326909   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326916   41675 command_runner.go:130] > # List of additional devices, specified as
	I0315 06:48:14.326926   41675 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0315 06:48:14.326936   41675 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0315 06:48:14.326949   41675 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 06:48:14.326958   41675 command_runner.go:130] > # additional_devices = [
	I0315 06:48:14.326963   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326976   41675 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0315 06:48:14.326981   41675 command_runner.go:130] > # cdi_spec_dirs = [
	I0315 06:48:14.326986   41675 command_runner.go:130] > # 	"/etc/cdi",
	I0315 06:48:14.326996   41675 command_runner.go:130] > # 	"/var/run/cdi",
	I0315 06:48:14.327002   41675 command_runner.go:130] > # ]
	I0315 06:48:14.327011   41675 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0315 06:48:14.327024   41675 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0315 06:48:14.327030   41675 command_runner.go:130] > # Defaults to false.
	I0315 06:48:14.327038   41675 command_runner.go:130] > # device_ownership_from_security_context = false
	I0315 06:48:14.327047   41675 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0315 06:48:14.327059   41675 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0315 06:48:14.327076   41675 command_runner.go:130] > # hooks_dir = [
	I0315 06:48:14.327086   41675 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0315 06:48:14.327092   41675 command_runner.go:130] > # ]
	I0315 06:48:14.327101   41675 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0315 06:48:14.327113   41675 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0315 06:48:14.327121   41675 command_runner.go:130] > # its default mounts from the following two files:
	I0315 06:48:14.327129   41675 command_runner.go:130] > #
	I0315 06:48:14.327139   41675 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0315 06:48:14.327150   41675 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0315 06:48:14.327161   41675 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0315 06:48:14.327168   41675 command_runner.go:130] > #
	I0315 06:48:14.327178   41675 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0315 06:48:14.327191   41675 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0315 06:48:14.327201   41675 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0315 06:48:14.327212   41675 command_runner.go:130] > #      only add mounts it finds in this file.
	I0315 06:48:14.327220   41675 command_runner.go:130] > #
	I0315 06:48:14.327226   41675 command_runner.go:130] > # default_mounts_file = ""
	I0315 06:48:14.327244   41675 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0315 06:48:14.327257   41675 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0315 06:48:14.327266   41675 command_runner.go:130] > pids_limit = 1024
	I0315 06:48:14.327278   41675 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0315 06:48:14.327289   41675 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0315 06:48:14.327300   41675 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0315 06:48:14.327313   41675 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0315 06:48:14.327322   41675 command_runner.go:130] > # log_size_max = -1
	I0315 06:48:14.327332   41675 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0315 06:48:14.327341   41675 command_runner.go:130] > # log_to_journald = false
	I0315 06:48:14.327350   41675 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0315 06:48:14.327370   41675 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0315 06:48:14.327385   41675 command_runner.go:130] > # Path to directory for container attach sockets.
	I0315 06:48:14.327400   41675 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0315 06:48:14.327412   41675 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0315 06:48:14.327421   41675 command_runner.go:130] > # bind_mount_prefix = ""
	I0315 06:48:14.327429   41675 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0315 06:48:14.327437   41675 command_runner.go:130] > # read_only = false
	I0315 06:48:14.327447   41675 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0315 06:48:14.327467   41675 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0315 06:48:14.327478   41675 command_runner.go:130] > # live configuration reload.
	I0315 06:48:14.327484   41675 command_runner.go:130] > # log_level = "info"
	I0315 06:48:14.327498   41675 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0315 06:48:14.327505   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.327511   41675 command_runner.go:130] > # log_filter = ""
	I0315 06:48:14.327519   41675 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0315 06:48:14.327528   41675 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0315 06:48:14.327538   41675 command_runner.go:130] > # separated by comma.
	I0315 06:48:14.327548   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327557   41675 command_runner.go:130] > # uid_mappings = ""
	I0315 06:48:14.327566   41675 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0315 06:48:14.327579   41675 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0315 06:48:14.327587   41675 command_runner.go:130] > # separated by comma.
	I0315 06:48:14.327602   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327611   41675 command_runner.go:130] > # gid_mappings = ""
	I0315 06:48:14.327622   41675 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0315 06:48:14.327642   41675 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 06:48:14.327655   41675 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 06:48:14.327670   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327680   41675 command_runner.go:130] > # minimum_mappable_uid = -1
	I0315 06:48:14.327693   41675 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0315 06:48:14.327706   41675 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 06:48:14.327718   41675 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 06:48:14.327732   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327741   41675 command_runner.go:130] > # minimum_mappable_gid = -1
	I0315 06:48:14.327754   41675 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0315 06:48:14.327766   41675 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0315 06:48:14.327778   41675 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0315 06:48:14.327788   41675 command_runner.go:130] > # ctr_stop_timeout = 30
	I0315 06:48:14.327805   41675 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0315 06:48:14.327817   41675 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0315 06:48:14.327828   41675 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0315 06:48:14.327840   41675 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0315 06:48:14.327855   41675 command_runner.go:130] > drop_infra_ctr = false
	I0315 06:48:14.327868   41675 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0315 06:48:14.327885   41675 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0315 06:48:14.327900   41675 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0315 06:48:14.327909   41675 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0315 06:48:14.327923   41675 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0315 06:48:14.327936   41675 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0315 06:48:14.327948   41675 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0315 06:48:14.327959   41675 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0315 06:48:14.327968   41675 command_runner.go:130] > # shared_cpuset = ""
	I0315 06:48:14.327981   41675 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0315 06:48:14.327992   41675 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0315 06:48:14.328001   41675 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0315 06:48:14.328015   41675 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0315 06:48:14.328024   41675 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0315 06:48:14.328036   41675 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0315 06:48:14.328049   41675 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0315 06:48:14.328058   41675 command_runner.go:130] > # enable_criu_support = false
	I0315 06:48:14.328066   41675 command_runner.go:130] > # Enable/disable the generation of the container,
	I0315 06:48:14.328076   41675 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0315 06:48:14.328085   41675 command_runner.go:130] > # enable_pod_events = false
	I0315 06:48:14.328093   41675 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0315 06:48:14.328114   41675 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0315 06:48:14.328123   41675 command_runner.go:130] > # default_runtime = "runc"
	I0315 06:48:14.328130   41675 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0315 06:48:14.328144   41675 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0315 06:48:14.328161   41675 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0315 06:48:14.328171   41675 command_runner.go:130] > # creation as a file is not desired either.
	I0315 06:48:14.328192   41675 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0315 06:48:14.328205   41675 command_runner.go:130] > # the hostname is being managed dynamically.
	I0315 06:48:14.328215   41675 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0315 06:48:14.328219   41675 command_runner.go:130] > # ]
	I0315 06:48:14.328228   41675 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0315 06:48:14.328246   41675 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0315 06:48:14.328259   41675 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0315 06:48:14.328270   41675 command_runner.go:130] > # Each entry in the table should follow the format:
	I0315 06:48:14.328274   41675 command_runner.go:130] > #
	I0315 06:48:14.328287   41675 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0315 06:48:14.328298   41675 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0315 06:48:14.328306   41675 command_runner.go:130] > # runtime_type = "oci"
	I0315 06:48:14.328360   41675 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0315 06:48:14.328373   41675 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0315 06:48:14.328378   41675 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0315 06:48:14.328385   41675 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0315 06:48:14.328392   41675 command_runner.go:130] > # monitor_env = []
	I0315 06:48:14.328400   41675 command_runner.go:130] > # privileged_without_host_devices = false
	I0315 06:48:14.328408   41675 command_runner.go:130] > # allowed_annotations = []
	I0315 06:48:14.328418   41675 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0315 06:48:14.328426   41675 command_runner.go:130] > # Where:
	I0315 06:48:14.328436   41675 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0315 06:48:14.328447   41675 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0315 06:48:14.328462   41675 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0315 06:48:14.328487   41675 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0315 06:48:14.328492   41675 command_runner.go:130] > #   in $PATH.
	I0315 06:48:14.328504   41675 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0315 06:48:14.328513   41675 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0315 06:48:14.328521   41675 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0315 06:48:14.328529   41675 command_runner.go:130] > #   state.
	I0315 06:48:14.328537   41675 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0315 06:48:14.328548   41675 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0315 06:48:14.328559   41675 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0315 06:48:14.328569   41675 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0315 06:48:14.328580   41675 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0315 06:48:14.328592   41675 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0315 06:48:14.328598   41675 command_runner.go:130] > #   The currently recognized values are:
	I0315 06:48:14.328610   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0315 06:48:14.328622   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0315 06:48:14.328637   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0315 06:48:14.328651   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0315 06:48:14.328664   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0315 06:48:14.328684   41675 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0315 06:48:14.328697   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0315 06:48:14.328709   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0315 06:48:14.328725   41675 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0315 06:48:14.328737   41675 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0315 06:48:14.328746   41675 command_runner.go:130] > #   deprecated option "conmon".
	I0315 06:48:14.328758   41675 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0315 06:48:14.328769   41675 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0315 06:48:14.328782   41675 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0315 06:48:14.328791   41675 command_runner.go:130] > #   should be moved to the container's cgroup
	I0315 06:48:14.328801   41675 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0315 06:48:14.328825   41675 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0315 06:48:14.328837   41675 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0315 06:48:14.328847   41675 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0315 06:48:14.328854   41675 command_runner.go:130] > #
	I0315 06:48:14.328860   41675 command_runner.go:130] > # Using the seccomp notifier feature:
	I0315 06:48:14.328867   41675 command_runner.go:130] > #
	I0315 06:48:14.328875   41675 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0315 06:48:14.328887   41675 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0315 06:48:14.328894   41675 command_runner.go:130] > #
	I0315 06:48:14.328903   41675 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0315 06:48:14.328914   41675 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0315 06:48:14.328921   41675 command_runner.go:130] > #
	I0315 06:48:14.328928   41675 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0315 06:48:14.328936   41675 command_runner.go:130] > # feature.
	I0315 06:48:14.328940   41675 command_runner.go:130] > #
	I0315 06:48:14.328952   41675 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0315 06:48:14.328965   41675 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0315 06:48:14.328978   41675 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0315 06:48:14.328989   41675 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0315 06:48:14.329000   41675 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0315 06:48:14.329007   41675 command_runner.go:130] > #
	I0315 06:48:14.329015   41675 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0315 06:48:14.329027   41675 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0315 06:48:14.329036   41675 command_runner.go:130] > #
	I0315 06:48:14.329049   41675 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0315 06:48:14.329060   41675 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0315 06:48:14.329068   41675 command_runner.go:130] > #
	I0315 06:48:14.329078   41675 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0315 06:48:14.329096   41675 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0315 06:48:14.329104   41675 command_runner.go:130] > # limitation.
	I0315 06:48:14.329113   41675 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0315 06:48:14.329118   41675 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0315 06:48:14.329127   41675 command_runner.go:130] > runtime_type = "oci"
	I0315 06:48:14.329135   41675 command_runner.go:130] > runtime_root = "/run/runc"
	I0315 06:48:14.329145   41675 command_runner.go:130] > runtime_config_path = ""
	I0315 06:48:14.329155   41675 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0315 06:48:14.329166   41675 command_runner.go:130] > monitor_cgroup = "pod"
	I0315 06:48:14.329176   41675 command_runner.go:130] > monitor_exec_cgroup = ""
	I0315 06:48:14.329183   41675 command_runner.go:130] > monitor_env = [
	I0315 06:48:14.329192   41675 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 06:48:14.329200   41675 command_runner.go:130] > ]
	I0315 06:48:14.329207   41675 command_runner.go:130] > privileged_without_host_devices = false
	I0315 06:48:14.329221   41675 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0315 06:48:14.329237   41675 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0315 06:48:14.329250   41675 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0315 06:48:14.329260   41675 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0315 06:48:14.329274   41675 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0315 06:48:14.329284   41675 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0315 06:48:14.329299   41675 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0315 06:48:14.329313   41675 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0315 06:48:14.329324   41675 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0315 06:48:14.329337   41675 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0315 06:48:14.329344   41675 command_runner.go:130] > # Example:
	I0315 06:48:14.329351   41675 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0315 06:48:14.329360   41675 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0315 06:48:14.329370   41675 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0315 06:48:14.329381   41675 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0315 06:48:14.329389   41675 command_runner.go:130] > # cpuset = 0
	I0315 06:48:14.329398   41675 command_runner.go:130] > # cpushares = "0-1"
	I0315 06:48:14.329406   41675 command_runner.go:130] > # Where:
	I0315 06:48:14.329415   41675 command_runner.go:130] > # The workload name is workload-type.
	I0315 06:48:14.329426   41675 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0315 06:48:14.329438   41675 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0315 06:48:14.329446   41675 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0315 06:48:14.329464   41675 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0315 06:48:14.329472   41675 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0315 06:48:14.329479   41675 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0315 06:48:14.329488   41675 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0315 06:48:14.329494   41675 command_runner.go:130] > # Default value is set to true
	I0315 06:48:14.329505   41675 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0315 06:48:14.329512   41675 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0315 06:48:14.329522   41675 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0315 06:48:14.329532   41675 command_runner.go:130] > # Default value is set to 'false'
	I0315 06:48:14.329542   41675 command_runner.go:130] > # disable_hostport_mapping = false
	I0315 06:48:14.329553   41675 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0315 06:48:14.329561   41675 command_runner.go:130] > #
	I0315 06:48:14.329571   41675 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0315 06:48:14.329583   41675 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0315 06:48:14.329595   41675 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0315 06:48:14.329607   41675 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0315 06:48:14.329617   41675 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0315 06:48:14.329623   41675 command_runner.go:130] > [crio.image]
	I0315 06:48:14.329633   41675 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0315 06:48:14.329643   41675 command_runner.go:130] > # default_transport = "docker://"
	I0315 06:48:14.329651   41675 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0315 06:48:14.329664   41675 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0315 06:48:14.329669   41675 command_runner.go:130] > # global_auth_file = ""
	I0315 06:48:14.329677   41675 command_runner.go:130] > # The image used to instantiate infra containers.
	I0315 06:48:14.329685   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.329696   41675 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0315 06:48:14.329707   41675 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0315 06:48:14.329716   41675 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0315 06:48:14.329726   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.329732   41675 command_runner.go:130] > # pause_image_auth_file = ""
	I0315 06:48:14.329743   41675 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0315 06:48:14.329756   41675 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0315 06:48:14.329769   41675 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0315 06:48:14.329786   41675 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0315 06:48:14.329796   41675 command_runner.go:130] > # pause_command = "/pause"
	I0315 06:48:14.329806   41675 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0315 06:48:14.329825   41675 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0315 06:48:14.329837   41675 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0315 06:48:14.329849   41675 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0315 06:48:14.329859   41675 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0315 06:48:14.329868   41675 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0315 06:48:14.329872   41675 command_runner.go:130] > # pinned_images = [
	I0315 06:48:14.329875   41675 command_runner.go:130] > # ]
	I0315 06:48:14.329881   41675 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0315 06:48:14.329890   41675 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0315 06:48:14.329901   41675 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0315 06:48:14.329911   41675 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0315 06:48:14.329916   41675 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0315 06:48:14.329922   41675 command_runner.go:130] > # signature_policy = ""
	I0315 06:48:14.329927   41675 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0315 06:48:14.329936   41675 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0315 06:48:14.329943   41675 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0315 06:48:14.329949   41675 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0315 06:48:14.329957   41675 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0315 06:48:14.329961   41675 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0315 06:48:14.329970   41675 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0315 06:48:14.329975   41675 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0315 06:48:14.329982   41675 command_runner.go:130] > # changing them here.
	I0315 06:48:14.329985   41675 command_runner.go:130] > # insecure_registries = [
	I0315 06:48:14.329990   41675 command_runner.go:130] > # ]
	I0315 06:48:14.329996   41675 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0315 06:48:14.330004   41675 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0315 06:48:14.330008   41675 command_runner.go:130] > # image_volumes = "mkdir"
	I0315 06:48:14.330013   41675 command_runner.go:130] > # Temporary directory to use for storing big files
	I0315 06:48:14.330019   41675 command_runner.go:130] > # big_files_temporary_dir = ""
	I0315 06:48:14.330024   41675 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0315 06:48:14.330030   41675 command_runner.go:130] > # CNI plugins.
	I0315 06:48:14.330033   41675 command_runner.go:130] > [crio.network]
	I0315 06:48:14.330038   41675 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0315 06:48:14.330046   41675 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0315 06:48:14.330054   41675 command_runner.go:130] > # cni_default_network = ""
	I0315 06:48:14.330062   41675 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0315 06:48:14.330074   41675 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0315 06:48:14.330082   41675 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0315 06:48:14.330086   41675 command_runner.go:130] > # plugin_dirs = [
	I0315 06:48:14.330091   41675 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0315 06:48:14.330094   41675 command_runner.go:130] > # ]
	I0315 06:48:14.330102   41675 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0315 06:48:14.330105   41675 command_runner.go:130] > [crio.metrics]
	I0315 06:48:14.330110   41675 command_runner.go:130] > # Globally enable or disable metrics support.
	I0315 06:48:14.330113   41675 command_runner.go:130] > enable_metrics = true
	I0315 06:48:14.330120   41675 command_runner.go:130] > # Specify enabled metrics collectors.
	I0315 06:48:14.330124   41675 command_runner.go:130] > # Per default all metrics are enabled.
	I0315 06:48:14.330133   41675 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0315 06:48:14.330139   41675 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0315 06:48:14.330147   41675 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0315 06:48:14.330151   41675 command_runner.go:130] > # metrics_collectors = [
	I0315 06:48:14.330157   41675 command_runner.go:130] > # 	"operations",
	I0315 06:48:14.330161   41675 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0315 06:48:14.330165   41675 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0315 06:48:14.330173   41675 command_runner.go:130] > # 	"operations_errors",
	I0315 06:48:14.330177   41675 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0315 06:48:14.330184   41675 command_runner.go:130] > # 	"image_pulls_by_name",
	I0315 06:48:14.330191   41675 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0315 06:48:14.330200   41675 command_runner.go:130] > # 	"image_pulls_failures",
	I0315 06:48:14.330206   41675 command_runner.go:130] > # 	"image_pulls_successes",
	I0315 06:48:14.330216   41675 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0315 06:48:14.330222   41675 command_runner.go:130] > # 	"image_layer_reuse",
	I0315 06:48:14.330237   41675 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0315 06:48:14.330246   41675 command_runner.go:130] > # 	"containers_oom_total",
	I0315 06:48:14.330252   41675 command_runner.go:130] > # 	"containers_oom",
	I0315 06:48:14.330261   41675 command_runner.go:130] > # 	"processes_defunct",
	I0315 06:48:14.330267   41675 command_runner.go:130] > # 	"operations_total",
	I0315 06:48:14.330276   41675 command_runner.go:130] > # 	"operations_latency_seconds",
	I0315 06:48:14.330283   41675 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0315 06:48:14.330293   41675 command_runner.go:130] > # 	"operations_errors_total",
	I0315 06:48:14.330299   41675 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0315 06:48:14.330309   41675 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0315 06:48:14.330322   41675 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0315 06:48:14.330332   41675 command_runner.go:130] > # 	"image_pulls_success_total",
	I0315 06:48:14.330339   41675 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0315 06:48:14.330348   41675 command_runner.go:130] > # 	"containers_oom_count_total",
	I0315 06:48:14.330358   41675 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0315 06:48:14.330369   41675 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0315 06:48:14.330374   41675 command_runner.go:130] > # ]
	I0315 06:48:14.330385   41675 command_runner.go:130] > # The port on which the metrics server will listen.
	I0315 06:48:14.330395   41675 command_runner.go:130] > # metrics_port = 9090
	I0315 06:48:14.330406   41675 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0315 06:48:14.330414   41675 command_runner.go:130] > # metrics_socket = ""
	I0315 06:48:14.330423   41675 command_runner.go:130] > # The certificate for the secure metrics server.
	I0315 06:48:14.330434   41675 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0315 06:48:14.330441   41675 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0315 06:48:14.330448   41675 command_runner.go:130] > # certificate on any modification event.
	I0315 06:48:14.330451   41675 command_runner.go:130] > # metrics_cert = ""
	I0315 06:48:14.330456   41675 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0315 06:48:14.330463   41675 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0315 06:48:14.330467   41675 command_runner.go:130] > # metrics_key = ""
	I0315 06:48:14.330472   41675 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0315 06:48:14.330478   41675 command_runner.go:130] > [crio.tracing]
	I0315 06:48:14.330483   41675 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0315 06:48:14.330488   41675 command_runner.go:130] > # enable_tracing = false
	I0315 06:48:14.330493   41675 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0315 06:48:14.330500   41675 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0315 06:48:14.330506   41675 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0315 06:48:14.330514   41675 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0315 06:48:14.330518   41675 command_runner.go:130] > # CRI-O NRI configuration.
	I0315 06:48:14.330524   41675 command_runner.go:130] > [crio.nri]
	I0315 06:48:14.330528   41675 command_runner.go:130] > # Globally enable or disable NRI.
	I0315 06:48:14.330531   41675 command_runner.go:130] > # enable_nri = false
	I0315 06:48:14.330535   41675 command_runner.go:130] > # NRI socket to listen on.
	I0315 06:48:14.330540   41675 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0315 06:48:14.330546   41675 command_runner.go:130] > # NRI plugin directory to use.
	I0315 06:48:14.330551   41675 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0315 06:48:14.330558   41675 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0315 06:48:14.330568   41675 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0315 06:48:14.330575   41675 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0315 06:48:14.330580   41675 command_runner.go:130] > # nri_disable_connections = false
	I0315 06:48:14.330586   41675 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0315 06:48:14.330590   41675 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0315 06:48:14.330598   41675 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0315 06:48:14.330602   41675 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0315 06:48:14.330610   41675 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0315 06:48:14.330613   41675 command_runner.go:130] > [crio.stats]
	I0315 06:48:14.330619   41675 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0315 06:48:14.330626   41675 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0315 06:48:14.330631   41675 command_runner.go:130] > # stats_collection_period = 0
	I0315 06:48:14.330665   41675 command_runner.go:130] ! time="2024-03-15 06:48:14.294307123Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0315 06:48:14.330685   41675 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
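The dump above is the effective CRI-O configuration that minikube inspects before bringing up the kubelet. As a hedged aside, a minimal sketch of how one of those values (pids_limit, chosen arbitrarily) could be overridden out of band, assuming CRI-O's standard /etc/crio/crio.conf.d drop-in directory; the file name and value are illustrative and are not something this test run does:

	# hypothetical drop-in: CRI-O merges *.conf files from crio.conf.d over crio.conf
	printf '[crio.runtime]\npids_limit = 2048\n' | sudo tee /etc/crio/crio.conf.d/99-pids-limit.conf
	sudo systemctl restart crio   # the new TOML is read on the next daemon start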
	I0315 06:48:14.330811   41675 cni.go:84] Creating CNI manager for ""
	I0315 06:48:14.330825   41675 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:48:14.330834   41675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:48:14.330851   41675 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-763469 NodeName:multinode-763469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:48:14.330989   41675 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-763469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 06:48:14.331055   41675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:48:14.343061   41675 command_runner.go:130] > kubeadm
	I0315 06:48:14.343084   41675 command_runner.go:130] > kubectl
	I0315 06:48:14.343089   41675 command_runner.go:130] > kubelet
	I0315 06:48:14.343106   41675 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:48:14.343148   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 06:48:14.354347   41675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0315 06:48:14.374506   41675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:48:14.394211   41675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
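The kubeadm config rendered above has just been copied to /var/tmp/minikube/kubeadm.yaml.new using the binaries staged under /var/lib/minikube/binaries/v1.28.4. As a sketch only (not a step this test performs, and assuming a kubeadm new enough to ship the "config validate" subcommand, v1.26+), such a file could be sanity-checked before use:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new   # reports problems in the rendered configuration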
	I0315 06:48:14.415822   41675 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0315 06:48:14.420096   41675 command_runner.go:130] > 192.168.39.29	control-plane.minikube.internal
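The grep above confirms that control-plane.minikube.internal already resolves via /etc/hosts. If it did not, an entry could be ensured idempotently along these lines (IP and name taken from the log; this sketch is illustrative, not what minikube itself runs):

	grep -q 'control-plane.minikube.internal' /etc/hosts \
	  || echo '192.168.39.29 control-plane.minikube.internal' | sudo tee -a /etc/hosts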
	I0315 06:48:14.420172   41675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:48:14.577125   41675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:48:14.593458   41675 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469 for IP: 192.168.39.29
	I0315 06:48:14.593487   41675 certs.go:194] generating shared ca certs ...
	I0315 06:48:14.593526   41675 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:48:14.593688   41675 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:48:14.593755   41675 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:48:14.593768   41675 certs.go:256] generating profile certs ...
	I0315 06:48:14.593864   41675 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/client.key
	I0315 06:48:14.593939   41675 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key.722d4f19
	I0315 06:48:14.593999   41675 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key
	I0315 06:48:14.594013   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:48:14.594030   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:48:14.594045   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:48:14.594063   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:48:14.594078   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:48:14.594095   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:48:14.594105   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:48:14.594114   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:48:14.594162   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:48:14.594191   41675 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:48:14.594202   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:48:14.594242   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:48:14.594289   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:48:14.594325   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:48:14.594395   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:48:14.594428   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.594441   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.594452   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.594987   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:48:14.620174   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:48:14.644258   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:48:14.668654   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:48:14.692544   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 06:48:14.717635   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:48:14.742686   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:48:14.767487   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 06:48:14.792204   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:48:14.817523   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:48:14.847861   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:48:14.877639   41675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:48:14.897129   41675 ssh_runner.go:195] Run: openssl version
	I0315 06:48:14.903719   41675 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0315 06:48:14.903794   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:48:14.916303   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921175   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921269   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921332   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.927446   41675 command_runner.go:130] > 51391683
	I0315 06:48:14.927523   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:48:14.939131   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:48:14.951876   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957109   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957143   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957207   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.962857   41675 command_runner.go:130] > 3ec20f2e
	I0315 06:48:14.962933   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:48:14.973317   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:48:14.985422   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.989995   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.990023   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.990071   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.996480   41675 command_runner.go:130] > b5213941
	I0315 06:48:14.996811   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
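	(For context: the log lines above show the two shell steps minikube runs for each CA certificate — `openssl x509 -hash -noout -in <pem>` to get the OpenSSL subject hash, then `ln -fs` to create `/etc/ssl/certs/<hash>.0`. The following is a minimal, illustrative Go sketch of those same two steps; it is not minikube's implementation, and the certificate path is just an example from the log.)

    // ca_hash_link.go - illustrative only; not minikube's own code.
    // Computes the OpenSSL subject hash of a CA certificate and links
    // /etc/ssl/certs/<hash>.0 to it, mirroring the shell steps above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        // Step 1: `openssl x509 -hash -noout -in <pem>` prints e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "hash failed:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // Step 2: `ln -fs <pem> /etc/ssl/certs/<hash>.0` (needs root on the node).
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f semantics: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pem)
    }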
	I0315 06:48:15.007049   41675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:48:15.011510   41675 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:48:15.011528   41675 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0315 06:48:15.011534   41675 command_runner.go:130] > Device: 253,1	Inode: 9432637     Links: 1
	I0315 06:48:15.011541   41675 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 06:48:15.011546   41675 command_runner.go:130] > Access: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011552   41675 command_runner.go:130] > Modify: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011557   41675 command_runner.go:130] > Change: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011561   41675 command_runner.go:130] >  Birth: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011690   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:48:15.017529   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.017596   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:48:15.023075   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.023221   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:48:15.028940   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.029014   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:48:15.034501   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.034675   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:48:15.040404   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.040477   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 06:48:15.046469   41675 command_runner.go:130] > Certificate will not expire
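	(For context: each "Certificate will not expire" line above is the output of `openssl x509 -noout -in <crt> -checkend 86400`, i.e. a check that the certificate is still valid 24 hours from now. The sketch below performs the equivalent check in pure Go using crypto/x509; it is illustrative only, and the certificate paths are taken from the log.)

    // checkend.go - illustrative sketch, not minikube's implementation.
    // Equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        certs := []string{ // paths as they appear in the log above
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        deadline := time.Now().Add(24 * time.Hour)

        for _, path := range certs {
            data, err := os.ReadFile(path)
            if err != nil {
                fmt.Println(path, "read error:", err)
                continue
            }
            block, _ := pem.Decode(data)
            if block == nil {
                fmt.Println(path, "no PEM block found")
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Println(path, "parse error:", err)
                continue
            }
            if cert.NotAfter.After(deadline) {
                fmt.Println(path, "certificate will not expire")
            } else {
                fmt.Println(path, "certificate expires within 24h")
            }
        }
    }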
	I0315 06:48:15.046636   41675 kubeadm.go:391] StartCluster: {Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:48:15.046781   41675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:48:15.046828   41675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:48:15.088618   41675 command_runner.go:130] > 5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337
	I0315 06:48:15.088650   41675 command_runner.go:130] > 4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee
	I0315 06:48:15.088659   41675 command_runner.go:130] > 2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0
	I0315 06:48:15.088670   41675 command_runner.go:130] > 5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c
	I0315 06:48:15.088812   41675 command_runner.go:130] > 41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555
	I0315 06:48:15.089071   41675 command_runner.go:130] > 68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948
	I0315 06:48:15.089098   41675 command_runner.go:130] > e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe
	I0315 06:48:15.089242   41675 command_runner.go:130] > 26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886
	I0315 06:48:15.090814   41675 cri.go:89] found id: "5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337"
	I0315 06:48:15.090829   41675 cri.go:89] found id: "4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee"
	I0315 06:48:15.090832   41675 cri.go:89] found id: "2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0"
	I0315 06:48:15.090836   41675 cri.go:89] found id: "5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c"
	I0315 06:48:15.090838   41675 cri.go:89] found id: "41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555"
	I0315 06:48:15.090841   41675 cri.go:89] found id: "68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948"
	I0315 06:48:15.090844   41675 cri.go:89] found id: "e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe"
	I0315 06:48:15.090846   41675 cri.go:89] found id: "26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886"
	I0315 06:48:15.090859   41675 cri.go:89] found id: ""
	I0315 06:48:15.090908   41675 ssh_runner.go:195] Run: sudo runc list -f json
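	(For context: the "found id:" lines above come from parsing the output of the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` command shown earlier, which prints one container ID per line. The sketch below runs the same query and collects the IDs; it is illustrative only, assumes crictl and root access on the node, and is not minikube's cri.go.)

    // list_kube_system.go - illustrative sketch, not minikube's own code.
    // Lists kube-system container IDs the same way the log above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the log shows minikube running over SSH.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }

        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }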
	
	
	==> CRI-O <==
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.646540135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485382646516216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c09fc2d6-e98c-4726-9bb9-db2c6736247d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.647106463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df275e3a-8556-4a40-bff6-b86ad8a52c08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.647164470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df275e3a-8556-4a40-bff6-b86ad8a52c08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.647489187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df275e3a-8556-4a40-bff6-b86ad8a52c08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.701316796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68bcfd41-03c1-40e7-8b32-88f5274750ab name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.701419752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68bcfd41-03c1-40e7-8b32-88f5274750ab name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.702713895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6ea50e4-f742-47e4-8311-46fe88e32a03 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.703441067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485382703416805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6ea50e4-f742-47e4-8311-46fe88e32a03 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.704334165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c482317b-31ba-460b-9360-b2574f6072dc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.704409337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c482317b-31ba-460b-9360-b2574f6072dc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.706111498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c482317b-31ba-460b-9360-b2574f6072dc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.762748970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cce102b7-b695-4a76-b0f8-4bb4b4b52ac8 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.762896507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cce102b7-b695-4a76-b0f8-4bb4b4b52ac8 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.764389739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b307f1c-8e83-4ad6-8c25-157c6f07670e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.764954200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485382764917442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b307f1c-8e83-4ad6-8c25-157c6f07670e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.765470868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c825154b-8c74-4104-8567-c9aa8fd9535a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.765529586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c825154b-8c74-4104-8567-c9aa8fd9535a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.766328096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c825154b-8c74-4104-8567-c9aa8fd9535a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.814438745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5f97ee9-9a1d-4a4c-af6a-2b0d488fc1e1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.814535772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5f97ee9-9a1d-4a4c-af6a-2b0d488fc1e1 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.815475745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ff1ca53-4662-461b-b944-0c34a4dba5e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.815985334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485382815961666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ff1ca53-4662-461b-b944-0c34a4dba5e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.816548516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f6aafe6-0b4e-4899-842a-c805699a6a36 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.816621732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f6aafe6-0b4e-4899-842a-c805699a6a36 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:49:42 multinode-763469 crio[2833]: time="2024-03-15 06:49:42.817087992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f6aafe6-0b4e-4899-842a-c805699a6a36 name=/runtime.v1.RuntimeService/ListContainers
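
The CRI-O entries above (ListContainers, Version, ImageFsInfo) are the kubelet's routine CRI polling, captured because minikube runs crio with debug-level logging. A minimal sketch for pulling the same stream straight from the node, assuming the profile name multinode-763469 and the standard crio systemd unit used by this driver:

    # tail the crio unit journal on the control-plane node
    minikube ssh -p multinode-763469 "sudo journalctl -u crio --no-pager -n 200"
    # or let minikube collect the full log bundle it uses for reports like this one
    minikube logs -p multinode-763469 --file=multinode-763469-crio.log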
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fa90752894034       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   834e045b86375       busybox-5b5d89c9d6-tsdl7
	17be560545c16       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   f6f04004382d0       kindnet-r6vss
	575b095c9b6ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   56051dba7ac11       coredns-5dd5756b68-x6j8r
	b52525fcaadc2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b896f99a00ffc       storage-provisioner
	252417b5766a5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   8ef3b383c17ca       kube-proxy-zbg48
	b95fae7e21b3c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   9975d5c805a31       kube-scheduler-multinode-763469
	df9f4d76cb959       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   7d842a768b7ce       kube-apiserver-multinode-763469
	1a0631cbffdeb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   42b03cdc51414       kube-controller-manager-multinode-763469
	4c9c05513bc4c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   bcc63ec7ba04f       etcd-multinode-763469
	de2426966c6f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   2be90b6664676       busybox-5b5d89c9d6-tsdl7
	5b2463b16c7ce       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   91971b4934cb0       coredns-5dd5756b68-x6j8r
	4d3449c0f3016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   dda1a3b456791       storage-provisioner
	2b9c8e78c1a0c       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   968021900c2b1       kindnet-r6vss
	5b1efdd4fe112       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   88c3400102352       kube-proxy-zbg48
	41d71a9d86c83       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   36458949e6331       etcd-multinode-763469
	68a042e4e4694       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   7604fe166214c       kube-controller-manager-multinode-763469
	e4cf73083a60d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   9d323dcc0e31a       kube-apiserver-multinode-763469
	26e6081b0c5f0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   27f1756dc37d8       kube-scheduler-multinode-763469
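
The container status table above is the CRI-level view of the restarted control-plane node (attempt 1 containers Running, attempt 0 containers Exited from before the restart). A sketch of how to reproduce it on the node with crictl, assuming the multinode-763469 profile and that crictl is available inside the VM as it is for the crio runtime:

    # all containers, running and exited, as shown in the table
    minikube ssh -p multinode-763469 "sudo crictl ps -a"
    # logs of an individual container by ID prefix, e.g. the restarted busybox
    minikube ssh -p multinode-763469 "sudo crictl logs fa90752894034"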
	
	
	==> coredns [575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60665 - 45009 "HINFO IN 1386717294849346258.5119469821599325126. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00890861s
	
	
	==> coredns [5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337] <==
	[INFO] 10.244.0.3:52053 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001984596s
	[INFO] 10.244.0.3:47293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079275s
	[INFO] 10.244.0.3:42531 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055645s
	[INFO] 10.244.0.3:55263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001275632s
	[INFO] 10.244.0.3:48514 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078172s
	[INFO] 10.244.0.3:51364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050546s
	[INFO] 10.244.0.3:59743 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007089s
	[INFO] 10.244.1.2:48857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169242s
	[INFO] 10.244.1.2:42639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108367s
	[INFO] 10.244.1.2:54971 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134511s
	[INFO] 10.244.1.2:59158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071619s
	[INFO] 10.244.0.3:46690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105523s
	[INFO] 10.244.0.3:40047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094632s
	[INFO] 10.244.0.3:50430 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061466s
	[INFO] 10.244.0.3:52007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076394s
	[INFO] 10.244.1.2:48901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159478s
	[INFO] 10.244.1.2:39733 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192713s
	[INFO] 10.244.1.2:39738 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000234906s
	[INFO] 10.244.1.2:45872 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134492s
	[INFO] 10.244.0.3:42134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106099s
	[INFO] 10.244.0.3:42483 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121431s
	[INFO] 10.244.0.3:49061 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063754s
	[INFO] 10.244.0.3:35844 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098087s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
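
The SIGTERM and lameduck lines mark the first coredns container shutting down when the node restarted; the shorter block above it is the replacement container coming up. A sketch for retrieving both from the cluster, assuming the kubeconfig context carries the profile name as in the other kubectl invocations in this report:

    # current coredns container (attempt 1)
    kubectl --context multinode-763469 -n kube-system logs coredns-5dd5756b68-x6j8r
    # the exited attempt 0 shown here, if the kubelet still tracks it
    kubectl --context multinode-763469 -n kube-system logs coredns-5dd5756b68-x6j8r --previous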
	
	
	==> describe nodes <==
	Name:               multinode-763469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-763469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=multinode-763469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_42_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-763469
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:49:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-763469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b034b4c5ab34dcab4dd3f5b0751ccfd
	  System UUID:                9b034b4c-5ab3-4dca-b4dd-3f5b0751ccfd
	  Boot ID:                    7eeb4a26-f179-434e-abfe-6a7b68cb5c71
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tsdl7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 coredns-5dd5756b68-x6j8r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 etcd-multinode-763469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m37s
	  kube-system                 kindnet-r6vss                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-apiserver-multinode-763469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-controller-manager-multinode-763469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-zbg48                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-multinode-763469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m21s                  kube-proxy       
	  Normal  Starting                 80s                    kube-proxy       
	  Normal  Starting                 7m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m43s (x8 over 7m43s)  kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s (x8 over 7m43s)  kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s (x7 over 7m43s)  kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m36s                  kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m36s                  kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s                  kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m36s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m23s                  node-controller  Node multinode-763469 event: Registered Node multinode-763469 in Controller
	  Normal  NodeReady                7m18s                  kubelet          Node multinode-763469 status is now: NodeReady
	  Normal  Starting                 87s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node multinode-763469 event: Registered Node multinode-763469 in Controller
	
	
	Name:               multinode-763469-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-763469-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=multinode-763469
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_49_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-763469-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:49:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:49:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:49:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:49:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:49:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-763469-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 4715799d3e8f4535b9aad0aadf936ade
	  System UUID:                4715799d-3e8f-4535-b9aa-d0aadf936ade
	  Boot ID:                    0a9c0c55-d554-4a2a-bcd5-36c90cd746e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-pk8lw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-zfcwm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-b8jmp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m39s                  kube-proxy  
	  Normal  Starting                 38s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m45s (x5 over 6m46s)  kubelet     Node multinode-763469-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s (x5 over 6m46s)  kubelet     Node multinode-763469-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x5 over 6m46s)  kubelet     Node multinode-763469-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m36s                  kubelet     Node multinode-763469-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  42s (x5 over 43s)      kubelet     Node multinode-763469-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 43s)      kubelet     Node multinode-763469-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 43s)      kubelet     Node multinode-763469-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                34s                    kubelet     Node multinode-763469-m02 status is now: NodeReady
	
	
	Name:               multinode-763469-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-763469-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=multinode-763469
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_49_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:49:31 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-763469-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:49:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:49:39 +0000   Fri, 15 Mar 2024 06:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:49:39 +0000   Fri, 15 Mar 2024 06:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:49:39 +0000   Fri, 15 Mar 2024 06:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:49:39 +0000   Fri, 15 Mar 2024 06:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-763469-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1de1ddbfffcc4d8791192be74156f838
	  System UUID:                1de1ddbf-ffcc-4d87-9119-2be74156f838
	  Boot ID:                    1a81e58c-fb59-44b3-a520-52506bee38b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7j4pn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-proxy-gg57j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m51s                  kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m55s (x5 over 5m57s)  kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m55s (x5 over 5m57s)  kubelet          Node multinode-763469-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m55s (x5 over 5m57s)  kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m47s                  kubelet          Node multinode-763469-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node multinode-763469-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m17s)  kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12s (x5 over 13s)      kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x5 over 13s)      kubelet          Node multinode-763469-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x5 over 13s)      kubelet          Node multinode-763469-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                     node-controller  Node multinode-763469-m03 event: Registered Node multinode-763469-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-763469-m03 status is now: NodeReady
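
	The node descriptions above are standard kubectl describe node output. A minimal sketch for reproducing them against this run, assuming the profile name multinode-763469 is also the kubeconfig context (as with the other kubectl invocations in this report):

	  kubectl --context multinode-763469 describe node multinode-763469-m02 multinode-763469-m03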
	
	
	==> dmesg <==
	[  +0.183729] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.170501] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.270733] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.848588] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.060560] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.527046] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.571650] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:42] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +0.087785] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.186376] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.121856] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +5.088517] kauditd_printk_skb: 60 callbacks suppressed
	[Mar15 06:43] kauditd_printk_skb: 12 callbacks suppressed
	[Mar15 06:48] systemd-fstab-generator[2751]: Ignoring "noauto" option for root device
	[  +0.151156] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.187955] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.148580] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.264421] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +1.669915] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +1.697270] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[  +1.032152] kauditd_printk_skb: 164 callbacks suppressed
	[  +5.137046] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.810497] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.445533] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[ +20.065071] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555] <==
	{"level":"info","ts":"2024-03-15T06:42:02.717097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 2"}
	{"level":"info","ts":"2024-03-15T06:42:02.717155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-15T06:42:02.722468Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:multinode-763469 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:42:02.722559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:42:02.723088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:42:02.723686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:42:02.724118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	{"level":"info","ts":"2024-03-15T06:42:02.724231Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731681Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731735Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.743159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:42:02.743219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-03-15T06:43:46.832867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.558822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1597526901996223042 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" mod_revision:593 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T06:43:46.83332Z","caller":"traceutil/trace.go:171","msg":"trace[1572112730] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"199.606053ms","start":"2024-03-15T06:43:46.633666Z","end":"2024-03-15T06:43:46.833272Z","steps":["trace[1572112730] 'process raft request'  (duration: 26.665163ms)","trace[1572112730] 'compare'  (duration: 171.277086ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T06:46:40.615964Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T06:46:40.616112Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-763469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"]}
	{"level":"warn","ts":"2024-03-15T06:46:40.616305Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.616393Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.694094Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.29:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.694295Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.29:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:46:40.69442Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97e52954629f162b","current-leader-member-id":"97e52954629f162b"}
	{"level":"info","ts":"2024-03-15T06:46:40.697365Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:46:40.697518Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:46:40.697593Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-763469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"]}
	
	
	==> etcd [4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f] <==
	{"level":"info","ts":"2024-03-15T06:48:17.6279Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","added-peer-id":"97e52954629f162b","added-peer-peer-urls":["https://192.168.39.29:2380"]}
	{"level":"info","ts":"2024-03-15T06:48:17.628091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:48:17.628146Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:48:17.631302Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.631424Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.631453Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.644462Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T06:48:17.64612Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"97e52954629f162b","initial-advertise-peer-urls":["https://192.168.39.29:2380"],"listen-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.29:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T06:48:17.650049Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T06:48:17.645863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:48:17.650154Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:48:19.482154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgPreVoteResp from 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.488183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:48:19.488114Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:multinode-763469 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:48:19.489143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:48:19.489703Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	{"level":"info","ts":"2024-03-15T06:48:19.490387Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:48:19.490589Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:48:19.490627Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 06:49:43 up 8 min,  0 users,  load average: 0.52, 0.33, 0.16
	Linux multinode-763469 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0] <==
	I0315 06:49:03.029479       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:49:13.042130       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:49:13.042340       1 main.go:227] handling current node
	I0315 06:49:13.042396       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:49:13.042429       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:49:13.042600       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:49:13.042642       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:49:23.048887       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:49:23.048943       1 main.go:227] handling current node
	I0315 06:49:23.048959       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:49:23.048967       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:49:23.049142       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:49:23.049182       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:49:33.063603       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:49:33.063662       1 main.go:227] handling current node
	I0315 06:49:33.063687       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:49:33.063696       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:49:33.064966       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:49:33.065060       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.2.0/24] 
	I0315 06:49:43.070599       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:49:43.070650       1 main.go:227] handling current node
	I0315 06:49:43.070662       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:49:43.070667       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:49:43.070844       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:49:43.070872       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.2.0/24] 
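
	In the kindnet log above, multinode-763469-m03 switches from 10.244.3.0/24 to 10.244.2.0/24 once the node is re-registered. A quick cross-check of the currently assigned pod CIDRs (a sketch, assuming the same context name):

	  kubectl --context multinode-763469 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'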
	
	
	==> kindnet [2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0] <==
	I0315 06:45:55.494005       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:05.499554       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:05.499621       1 main.go:227] handling current node
	I0315 06:46:05.499633       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:05.499645       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:05.499856       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:05.499882       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:15.509503       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:15.509628       1 main.go:227] handling current node
	I0315 06:46:15.509646       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:15.509653       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:15.509912       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:15.509991       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:25.524143       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:25.524196       1 main.go:227] handling current node
	I0315 06:46:25.524215       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:25.524230       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:25.524350       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:25.524377       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:35.529383       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:35.529485       1 main.go:227] handling current node
	I0315 06:46:35.529514       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:35.529532       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:35.529756       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:35.529857       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901] <==
	I0315 06:48:20.932089       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 06:48:20.932130       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:48:20.932208       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:48:21.058570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0315 06:48:21.066090       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 06:48:21.126019       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 06:48:21.126592       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 06:48:21.126679       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 06:48:21.126918       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 06:48:21.130255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 06:48:21.130286       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 06:48:21.132448       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 06:48:21.132877       1 aggregator.go:166] initial CRD sync complete...
	I0315 06:48:21.132919       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 06:48:21.132925       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 06:48:21.132931       1 cache.go:39] Caches are synced for autoregister controller
	E0315 06:48:21.143258       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0315 06:48:21.936186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 06:48:23.850205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 06:48:23.971964       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0315 06:48:23.980620       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0315 06:48:24.060525       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 06:48:24.071147       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 06:48:34.118091       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 06:48:34.165584       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe] <==
	I0315 06:42:07.477422       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 06:42:20.071275       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0315 06:42:20.112027       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0315 06:46:40.610980       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0315 06:46:40.636276       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.636716       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637322       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637395       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637423       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637460       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637489       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637543       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637605       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637633       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637666       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637692       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637850       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637881       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637909       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637933       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637965       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637993       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.638025       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.641293       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.643453       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7] <==
	I0315 06:48:56.231598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.161613ms"
	I0315 06:48:56.231836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="104.775µs"
	I0315 06:49:01.939904       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m02\" does not exist"
	I0315 06:49:01.941503       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ktsnt" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ktsnt"
	I0315 06:49:01.956902       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m02" podCIDRs=["10.244.1.0/24"]
	I0315 06:49:02.414922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="170.838µs"
	I0315 06:49:02.481959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.693µs"
	I0315 06:49:02.489197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="75.263µs"
	I0315 06:49:02.499999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.332µs"
	I0315 06:49:02.511565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.558µs"
	I0315 06:49:02.518109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.494µs"
	I0315 06:49:04.303396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.665µs"
	I0315 06:49:09.196846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:09.218296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.518µs"
	I0315 06:49:09.234041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.778µs"
	I0315 06:49:12.758382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.31525ms"
	I0315 06:49:12.758483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.717µs"
	I0315 06:49:14.180544       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pk8lw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-pk8lw"
	I0315 06:49:28.841323       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:29.183920       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-763469-m03 event: Removing Node multinode-763469-m03 from Controller"
	I0315 06:49:31.886521       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:49:31.887217       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:31.936734       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.2.0/24"]
	I0315 06:49:34.184705       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-763469-m03 event: Registered Node multinode-763469-m03 in Controller"
	I0315 06:49:39.584936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	
	
	==> kube-controller-manager [68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948] <==
	I0315 06:43:15.959005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.230031ms"
	I0315 06:43:15.959217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="88.09µs"
	I0315 06:43:48.083249       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:43:48.084715       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:43:48.122654       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gg57j"
	I0315 06:43:48.132045       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7j4pn"
	I0315 06:43:48.147669       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.2.0/24"]
	I0315 06:43:50.043065       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-763469-m03"
	I0315 06:43:50.043365       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-763469-m03 event: Registered Node multinode-763469-m03 in Controller"
	I0315 06:43:56.480647       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:25.873419       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:28.351506       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:44:28.353596       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:28.365696       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.3.0/24"]
	I0315 06:44:35.097150       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:45:15.097991       1 event.go:307] "Event occurred" object="multinode-763469-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-763469-m02 status is now: NodeNotReady"
	I0315 06:45:15.097991       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m03"
	I0315 06:45:15.113672       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-b8jmp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.128157       1 event.go:307] "Event occurred" object="kube-system/kindnet-zfcwm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.149287       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ktsnt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.155969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.577126ms"
	I0315 06:45:15.156438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.555µs"
	I0315 06:45:20.160001       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-763469-m03 status is now: NodeNotReady"
	I0315 06:45:20.171507       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-gg57j" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:20.186267       1 event.go:307] "Event occurred" object="kube-system/kindnet-7j4pn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc] <==
	I0315 06:48:22.381854       1 server_others.go:69] "Using iptables proxy"
	I0315 06:48:22.432722       1 node.go:141] Successfully retrieved node IP: 192.168.39.29
	I0315 06:48:22.515752       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:48:22.515926       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:48:22.519374       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:48:22.519490       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:48:22.519825       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:48:22.520540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:48:22.521498       1 config.go:188] "Starting service config controller"
	I0315 06:48:22.521629       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:48:22.521687       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:48:22.521706       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:48:22.522319       1 config.go:315] "Starting node config controller"
	I0315 06:48:22.522873       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:48:22.621878       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:48:22.621963       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:48:22.622964       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c] <==
	I0315 06:42:21.208019       1 server_others.go:69] "Using iptables proxy"
	I0315 06:42:21.270636       1 node.go:141] Successfully retrieved node IP: 192.168.39.29
	I0315 06:42:21.366306       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:42:21.366327       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:42:21.373887       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:42:21.375137       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:42:21.375967       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:42:21.375983       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:42:21.378287       1 config.go:188] "Starting service config controller"
	I0315 06:42:21.379114       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:42:21.379153       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:42:21.379159       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:42:21.380394       1 config.go:315] "Starting node config controller"
	I0315 06:42:21.380402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:42:21.479666       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:42:21.479804       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:42:21.480714       1 shared_informer.go:318] Caches are synced for node config
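
	Both kube-proxy instances above report single-stack IPv4 with the iptables Proxier. A sketch for re-checking the mode from the live pods, assuming the usual kubeadm label k8s-app=kube-proxy:

	  kubectl --context multinode-763469 -n kube-system logs -l k8s-app=kube-proxy --tail=20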
	
	
	==> kube-scheduler [26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886] <==
	E0315 06:42:04.299227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:42:04.298403       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:42:04.299340       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:42:04.299464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:42:04.299566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:42:05.136319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:42:05.136371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:42:05.137580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:42:05.137600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:42:05.141914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:42:05.141957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:42:05.166246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:42:05.166294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:42:05.241160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:42:05.241298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:42:05.291238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:42:05.291380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:42:05.377246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:42:05.377382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:42:05.793414       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:42:05.793672       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0315 06:42:08.988496       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 06:46:40.633059       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:46:40.633167       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:46:40.633483       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0] <==
	I0315 06:48:18.538453       1 serving.go:348] Generated self-signed cert in-memory
	W0315 06:48:21.028335       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 06:48:21.028451       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:48:21.028464       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 06:48:21.028471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 06:48:21.070014       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0315 06:48:21.070057       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:48:21.072525       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 06:48:21.072731       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 06:48:21.072834       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 06:48:21.072881       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 06:48:21.173470       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.413729    3048 topology_manager.go:215] "Topology Admit Handler" podUID="17b13912-e637-4f97-9f58-16a39483c91e" podNamespace="kube-system" podName="coredns-5dd5756b68-x6j8r"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.413896    3048 topology_manager.go:215] "Topology Admit Handler" podUID="ee5dba32-45a3-44e1-80e2-f585e324cf82" podNamespace="kube-system" podName="kindnet-r6vss"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.414006    3048 topology_manager.go:215] "Topology Admit Handler" podUID="09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569" podNamespace="kube-system" podName="storage-provisioner"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.414086    3048 topology_manager.go:215] "Topology Admit Handler" podUID="4247590d-21e4-4ee1-8989-1cc15ec40318" podNamespace="default" podName="busybox-5b5d89c9d6-tsdl7"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.424481    3048 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.425092    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee5dba32-45a3-44e1-80e2-f585e324cf82-cni-cfg\") pod \"kindnet-r6vss\" (UID: \"ee5dba32-45a3-44e1-80e2-f585e324cf82\") " pod="kube-system/kindnet-r6vss"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.425294    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee5dba32-45a3-44e1-80e2-f585e324cf82-xtables-lock\") pod \"kindnet-r6vss\" (UID: \"ee5dba32-45a3-44e1-80e2-f585e324cf82\") " pod="kube-system/kindnet-r6vss"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.425503    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569-tmp\") pod \"storage-provisioner\" (UID: \"09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569\") " pod="kube-system/storage-provisioner"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.426521    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40630839-8887-4f18-b35c-e4f1f0e3a513-xtables-lock\") pod \"kube-proxy-zbg48\" (UID: \"40630839-8887-4f18-b35c-e4f1f0e3a513\") " pod="kube-system/kube-proxy-zbg48"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.426809    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40630839-8887-4f18-b35c-e4f1f0e3a513-lib-modules\") pod \"kube-proxy-zbg48\" (UID: \"40630839-8887-4f18-b35c-e4f1f0e3a513\") " pod="kube-system/kube-proxy-zbg48"
	Mar 15 06:48:21 multinode-763469 kubelet[3048]: I0315 06:48:21.426877    3048 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee5dba32-45a3-44e1-80e2-f585e324cf82-lib-modules\") pod \"kindnet-r6vss\" (UID: \"ee5dba32-45a3-44e1-80e2-f585e324cf82\") " pod="kube-system/kindnet-r6vss"
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.480172    3048 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:49:16 multinode-763469 kubelet[3048]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:49:16 multinode-763469 kubelet[3048]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:49:16 multinode-763469 kubelet[3048]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:49:16 multinode-763469 kubelet[3048]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.516837    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4247590d-21e4-4ee1-8989-1cc15ec40318/crio-2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Error finding container 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Status 404 returned error can't find the container with id 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.517279    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod40630839-8887-4f18-b35c-e4f1f0e3a513/crio-88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Error finding container 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Status 404 returned error can't find the container with id 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.517641    3048 manager.go:1106] Failed to create existing container: /kubepods/podee5dba32-45a3-44e1-80e2-f585e324cf82/crio-968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Error finding container 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Status 404 returned error can't find the container with id 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.517989    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podb83f83f07ca3131c46707e11d52155c8/crio-7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Error finding container 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Status 404 returned error can't find the container with id 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.518366    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podce435b119544c4c614d66991282e3c51/crio-27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Error finding container 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Status 404 returned error can't find the container with id 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.518585    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod17b13912-e637-4f97-9f58-16a39483c91e/crio-91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Error finding container 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Status 404 returned error can't find the container with id 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.518882    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podddaf1cb0928f1352bca011ce12428363/crio-36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Error finding container 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Status 404 returned error can't find the container with id 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.519211    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podffba728ba0f963033d8e304d674bfb10/crio-9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Error finding container 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Status 404 returned error can't find the container with id 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475
	Mar 15 06:49:16 multinode-763469 kubelet[3048]: E0315 06:49:16.519408    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/crio-dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Error finding container dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Status 404 returned error can't find the container with id dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:49:42.306885   42499 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-763469 -n multinode-763469
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-763469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (306.77s)
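Editor's note: the "failed to output last start logs ... bufio.Scanner: token too long" error in the stderr block above is the standard Go bufio.ErrTooLong, returned when a single line of the scanned file exceeds the scanner's default 64 KiB token limit; the very long single-line config dumps in lastStart.txt are enough to trigger it. The sketch below is a generic, self-contained illustration of the workaround (raising the scanner's buffer), not minikube's actual logs.go implementation; the file path is a placeholder.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines shows how to avoid bufio.Scanner's "token too long" error:
	// the default per-line token limit is 64 KiB, so a file with very long
	// single-line entries needs a larger buffer before scanning.
	func readLongLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit from the 64 KiB default to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		return sc.Err() // bufio.ErrTooLong if a line still exceeds the limit
	}

	func main() {
		// Placeholder path; the report's real file lives under .minikube/logs/.
		if err := readLongLines("lastStart.txt"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}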

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 stop
E0315 06:49:58.535069   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-763469 stop: exit status 82 (2m0.478411956s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-763469-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-763469 stop": exit status 82
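Editor's note: the GUEST_STOP_TIMEOUT above means the m02 VM still reported state "Running" when the stop deadline expired, so the command gave up with exit status 82. As an illustrative sketch only (not minikube's actual stop path; state is a hypothetical callback, not a real minikube or libmachine API), a stop-and-poll loop with a deadline looks like this:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// ErrStopTimeout mirrors the failure mode above: the node was asked to
	// stop but still reported "Running" when the deadline expired.
	var ErrStopTimeout = errors.New("unable to stop vm before deadline")

	// waitStopped polls state() every interval until it reports "Stopped" or
	// ctx expires. Illustrative only.
	func waitStopped(ctx context.Context, interval time.Duration, state func() (string, error)) error {
		last := "unknown"
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("%w: current state %q", ErrStopTimeout, last)
			case <-t.C:
				s, err := state()
				if err != nil {
					return err
				}
				last = s
				if s == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// A fake state callback that never leaves "Running" reproduces the timeout.
		err := waitStopped(ctx, time.Second, func() (string, error) { return "Running", nil })
		fmt.Println(err)
	}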
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-763469 status: exit status 3 (18.803917273s)

                                                
                                                
-- stdout --
	multinode-763469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-763469-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:52:05.924773   43042 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host
	E0315 06:52:05.924809   43042 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.26:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-763469 status" : exit status 3
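Editor's note: both status errors above are the same underlying symptom: a TCP dial to 192.168.39.26:22 fails with "no route to host", so the m02 worker cannot be reached over SSH at all. A quick reachability probe (hypothetical diagnostic; the address is copied from the status output above) confirms whether the port is even dialable before looking at SSH keys or auth:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the status error above; adjust for other nodes.
		addr := "192.168.39.26:22"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// Expect the same failure as the test, e.g. "connect: no route to host".
			fmt.Printf("unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("tcp reachable:", addr)
	}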
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-763469 -n multinode-763469
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-763469 logs -n 25: (1.609292766s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469:/home/docker/cp-test_multinode-763469-m02_multinode-763469.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469 sudo cat                                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m02_multinode-763469.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03:/home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469-m03 sudo cat                                   | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp testdata/cp-test.txt                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469:/home/docker/cp-test_multinode-763469-m03_multinode-763469.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469 sudo cat                                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02:/home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469-m02 sudo cat                                   | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-763469 node stop m03                                                          | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	| node    | multinode-763469 node start                                                             | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| stop    | -p multinode-763469                                                                     | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| start   | -p multinode-763469                                                                     | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:46 UTC | 15 Mar 24 06:49 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC |                     |
	| node    | multinode-763469 node delete                                                            | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC | 15 Mar 24 06:49 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-763469 stop                                                                   | multinode-763469 | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:46:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:46:39.708445   41675 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:46:39.708763   41675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:46:39.708778   41675 out.go:304] Setting ErrFile to fd 2...
	I0315 06:46:39.708785   41675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:46:39.709289   41675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:46:39.710238   41675 out.go:298] Setting JSON to false
	I0315 06:46:39.711206   41675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5296,"bootTime":1710479904,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:46:39.711271   41675 start.go:139] virtualization: kvm guest
	I0315 06:46:39.713484   41675 out.go:177] * [multinode-763469] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:46:39.715298   41675 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:46:39.715305   41675 notify.go:220] Checking for updates...
	I0315 06:46:39.717030   41675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:46:39.718720   41675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:46:39.720226   41675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:46:39.721746   41675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:46:39.723228   41675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:46:39.725137   41675 config.go:182] Loaded profile config "multinode-763469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:46:39.725239   41675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:46:39.725609   41675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:46:39.725652   41675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:46:39.740916   41675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0315 06:46:39.741346   41675 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:46:39.741911   41675 main.go:141] libmachine: Using API Version  1
	I0315 06:46:39.741931   41675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:46:39.742267   41675 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:46:39.742432   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.776960   41675 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:46:39.778333   41675 start.go:297] selected driver: kvm2
	I0315 06:46:39.778358   41675 start.go:901] validating driver "kvm2" against &{Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:46:39.778487   41675 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:46:39.778805   41675 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:46:39.778886   41675 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:46:39.793784   41675 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:46:39.794415   41675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:46:39.794473   41675 cni.go:84] Creating CNI manager for ""
	I0315 06:46:39.794484   41675 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:46:39.794551   41675 start.go:340] cluster config:
	{Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:46:39.794667   41675 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:46:39.796479   41675 out.go:177] * Starting "multinode-763469" primary control-plane node in "multinode-763469" cluster
	I0315 06:46:39.798062   41675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:46:39.798118   41675 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:46:39.798128   41675 cache.go:56] Caching tarball of preloaded images
	I0315 06:46:39.798231   41675 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 06:46:39.798247   41675 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 06:46:39.798384   41675 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/config.json ...
	I0315 06:46:39.798595   41675 start.go:360] acquireMachinesLock for multinode-763469: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:46:39.798639   41675 start.go:364] duration metric: took 24.438µs to acquireMachinesLock for "multinode-763469"
	I0315 06:46:39.798657   41675 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:46:39.798666   41675 fix.go:54] fixHost starting: 
	I0315 06:46:39.798909   41675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:46:39.798941   41675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:46:39.813233   41675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0315 06:46:39.813646   41675 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:46:39.814074   41675 main.go:141] libmachine: Using API Version  1
	I0315 06:46:39.814105   41675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:46:39.814400   41675 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:46:39.814584   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.814743   41675 main.go:141] libmachine: (multinode-763469) Calling .GetState
	I0315 06:46:39.816338   41675 fix.go:112] recreateIfNeeded on multinode-763469: state=Running err=<nil>
	W0315 06:46:39.816359   41675 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:46:39.818410   41675 out.go:177] * Updating the running kvm2 "multinode-763469" VM ...
	I0315 06:46:39.819881   41675 machine.go:94] provisionDockerMachine start ...
	I0315 06:46:39.819903   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:46:39.820136   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:39.822667   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.823175   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:39.823210   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.823370   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:39.823568   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.823771   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.823929   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:39.824089   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:39.824275   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:39.824286   41675 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:46:39.946363   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-763469
	
	I0315 06:46:39.946392   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:39.946648   41675 buildroot.go:166] provisioning hostname "multinode-763469"
	I0315 06:46:39.946679   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:39.946944   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:39.950032   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.950498   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:39.950529   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:39.950804   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:39.951065   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.951260   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:39.951405   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:39.951618   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:39.951822   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:39.951837   41675 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-763469 && echo "multinode-763469" | sudo tee /etc/hostname
	I0315 06:46:40.082466   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-763469
	
	I0315 06:46:40.082499   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.085472   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.085849   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.085874   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.086113   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.086335   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.086523   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.086675   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.086852   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:40.087063   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:40.087093   41675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-763469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-763469/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-763469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:46:40.201668   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:46:40.201698   41675 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:46:40.201714   41675 buildroot.go:174] setting up certificates
	I0315 06:46:40.201723   41675 provision.go:84] configureAuth start
	I0315 06:46:40.201731   41675 main.go:141] libmachine: (multinode-763469) Calling .GetMachineName
	I0315 06:46:40.202041   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:46:40.204613   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.205067   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.205100   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.205231   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.207417   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.207823   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.207870   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.207930   41675 provision.go:143] copyHostCerts
	I0315 06:46:40.207969   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:46:40.207997   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:46:40.208005   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:46:40.208082   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:46:40.208161   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:46:40.208177   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:46:40.208184   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:46:40.208208   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:46:40.208260   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:46:40.208276   41675 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:46:40.208282   41675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:46:40.208302   41675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:46:40.208356   41675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.multinode-763469 san=[127.0.0.1 192.168.39.29 localhost minikube multinode-763469]
	I0315 06:46:40.297910   41675 provision.go:177] copyRemoteCerts
	I0315 06:46:40.297968   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:46:40.297995   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.300845   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.301257   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.301301   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.301465   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.301668   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.301819   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.301951   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:46:40.391708   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 06:46:40.391819   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:46:40.420432   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 06:46:40.420514   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0315 06:46:40.447700   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 06:46:40.447777   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 06:46:40.473416   41675 provision.go:87] duration metric: took 271.680903ms to configureAuth
	I0315 06:46:40.473447   41675 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:46:40.473695   41675 config.go:182] Loaded profile config "multinode-763469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:46:40.473763   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:46:40.476339   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.476725   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:46:40.476767   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:46:40.476943   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:46:40.477111   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.477277   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:46:40.477403   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:46:40.477545   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:46:40.477716   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:46:40.477730   41675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:48:11.377134   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:48:11.377183   41675 machine.go:97] duration metric: took 1m31.557288028s to provisionDockerMachine
	I0315 06:48:11.377196   41675 start.go:293] postStartSetup for "multinode-763469" (driver="kvm2")
	I0315 06:48:11.377210   41675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:48:11.377240   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.377687   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:48:11.377722   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.380949   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.381428   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.381452   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.381677   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.381891   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.382065   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.382234   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.473207   41675 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:48:11.477776   41675 command_runner.go:130] > NAME=Buildroot
	I0315 06:48:11.477795   41675 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0315 06:48:11.477799   41675 command_runner.go:130] > ID=buildroot
	I0315 06:48:11.477804   41675 command_runner.go:130] > VERSION_ID=2023.02.9
	I0315 06:48:11.477809   41675 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0315 06:48:11.477836   41675 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:48:11.477851   41675 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:48:11.477909   41675 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:48:11.477984   41675 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:48:11.477994   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /etc/ssl/certs/160752.pem
	I0315 06:48:11.478076   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:48:11.488055   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:48:11.515998   41675 start.go:296] duration metric: took 138.787884ms for postStartSetup
	I0315 06:48:11.516046   41675 fix.go:56] duration metric: took 1m31.717379198s for fixHost
	I0315 06:48:11.516070   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.519119   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.519626   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.519645   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.519961   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.520221   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.520421   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.520587   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.520754   41675 main.go:141] libmachine: Using SSH client type: native
	I0315 06:48:11.520966   41675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0315 06:48:11.520978   41675 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 06:48:11.637686   41675 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710485291.615874397
	
	I0315 06:48:11.637707   41675 fix.go:216] guest clock: 1710485291.615874397
	I0315 06:48:11.637714   41675 fix.go:229] Guest: 2024-03-15 06:48:11.615874397 +0000 UTC Remote: 2024-03-15 06:48:11.516051552 +0000 UTC m=+91.852898782 (delta=99.822845ms)
	I0315 06:48:11.637746   41675 fix.go:200] guest clock delta is within tolerance: 99.822845ms
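For context on the guest-clock check above: the guest runs date +%s.%N over SSH, the result is parsed into a timestamp, and the absolute difference from the host clock is compared against a tolerance before any resync is attempted. A small, self-contained Go sketch of that comparison (the one-second tolerance and helper name are assumptions, not fix.go's actual values):

// clockdelta_sketch.go - illustrative only, not minikube's fix.go implementation.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string returned by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710485291.615874397") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for illustration; the real threshold lives in fix.go.
	const tolerance = 1 * time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}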
	I0315 06:48:11.637756   41675 start.go:83] releasing machines lock for "multinode-763469", held for 1m31.839106152s
	I0315 06:48:11.637779   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.638041   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:48:11.640800   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.641275   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.641299   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.641470   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.641976   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.642149   41675 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:48:11.642268   41675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:48:11.642312   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.642361   41675 ssh_runner.go:195] Run: cat /version.json
	I0315 06:48:11.642384   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:48:11.644932   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645198   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645320   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.645356   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645453   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.645572   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:11.645608   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:11.645622   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.645756   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:48:11.645812   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.645959   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:48:11.645974   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.646102   41675 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:48:11.646239   41675 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:48:11.726123   41675 command_runner.go:130] > {"iso_version": "v1.32.1-1710459732-18213", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3cbf09d91ff419d65a5234008c34d4cc95dfc38f"}
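The /version.json read above is a plain JSON document; decoding it into a struct is enough to recover the ISO, kicbase and minikube versions. A minimal Go sketch using the payload from the log (the struct itself is illustrative):

// versionjson_sketch.go - decodes the guest's /version.json shown in the log;
// field names mirror the JSON keys, the struct is an assumption for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

type guestVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := []byte(`{"iso_version": "v1.32.1-1710459732-18213", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3cbf09d91ff419d65a5234008c34d4cc95dfc38f"}`)
	var v guestVersion
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Printf("ISO %s built from commit %s\n", v.ISOVersion, v.Commit)
}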
	I0315 06:48:11.726470   41675 ssh_runner.go:195] Run: systemctl --version
	I0315 06:48:11.762745   41675 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0315 06:48:11.762795   41675 command_runner.go:130] > systemd 252 (252)
	I0315 06:48:11.762818   41675 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0315 06:48:11.762867   41675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:48:11.925678   41675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 06:48:11.933138   41675 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0315 06:48:11.933197   41675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:48:11.933252   41675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:48:11.943591   41675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 06:48:11.943616   41675 start.go:494] detecting cgroup driver to use...
	I0315 06:48:11.943729   41675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:48:11.961161   41675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:48:11.976676   41675 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:48:11.976729   41675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:48:11.991562   41675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:48:12.006903   41675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:48:12.162279   41675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:48:12.311667   41675 docker.go:233] disabling docker service ...
	I0315 06:48:12.311725   41675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:48:12.328588   41675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:48:12.344272   41675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:48:12.494952   41675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:48:12.639871   41675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:48:12.654908   41675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:48:12.676978   41675 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0315 06:48:12.677585   41675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 06:48:12.677672   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.689216   41675 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:48:12.689292   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.700695   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:48:12.712763   41675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
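The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf on the guest: set the pause image, force the cgroupfs cgroup manager, drop any existing conmon_cgroup line, then append conmon_cgroup = "pod" after the cgroup_manager line. A hedged Go sketch of the same substitutions applied to an in-memory copy of the file (the sample input contents are assumed):

// crioconf_sketch.go - mirrors, against an in-memory string, the sed edits the
// log applies to /etc/crio/crio.conf.d/02-crio.conf; sample input is assumed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Mirror: sed '/conmon_cgroup = .*/d' (delete any existing conmon_cgroup line)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")

	// Mirror: sed '/cgroup_manager = .*/a conmon_cgroup = "pod"' (re-emit the
	// cgroup_manager line followed by the new conmon_cgroup setting)
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Println(conf)
}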
	I0315 06:48:12.724331   41675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:48:12.736300   41675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:48:12.748623   41675 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0315 06:48:12.748743   41675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 06:48:12.760212   41675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:48:12.906197   41675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:48:14.060527   41675 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.154291615s)
	I0315 06:48:14.060558   41675 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:48:14.060601   41675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:48:14.065804   41675 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0315 06:48:14.065842   41675 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0315 06:48:14.065849   41675 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0315 06:48:14.065855   41675 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 06:48:14.065860   41675 command_runner.go:130] > Access: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065867   41675 command_runner.go:130] > Modify: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065872   41675 command_runner.go:130] > Change: 2024-03-15 06:48:13.931911651 +0000
	I0315 06:48:14.065876   41675 command_runner.go:130] >  Birth: -
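The "Will wait 60s for socket path" step above is a stat-until-present poll against /var/run/crio/crio.sock. A generic Go sketch of that pattern (only the 60s budget comes from the log; the 500ms interval is an assumption):

// waitsock_sketch.go - a poll-until-exists loop in the spirit of the wait for
// /var/run/crio/crio.sock; timeout handling is illustrative, not minikube's code.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}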
	I0315 06:48:14.065948   41675 start.go:562] Will wait 60s for crictl version
	I0315 06:48:14.065988   41675 ssh_runner.go:195] Run: which crictl
	I0315 06:48:14.069735   41675 command_runner.go:130] > /usr/bin/crictl
	I0315 06:48:14.069950   41675 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:48:14.114689   41675 command_runner.go:130] > Version:  0.1.0
	I0315 06:48:14.114711   41675 command_runner.go:130] > RuntimeName:  cri-o
	I0315 06:48:14.114716   41675 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0315 06:48:14.114721   41675 command_runner.go:130] > RuntimeApiVersion:  v1
	I0315 06:48:14.114740   41675 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
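The crictl version output above is simple "Key:  value" text; splitting each line on its first colon recovers the runtime name and versions. An illustrative Go sketch (the parsing helper is not minikube's code):

// crictlversion_sketch.go - parses the key/value output of `crictl version`
// shown above into a map; illustrative only.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1`

	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	fmt.Printf("detected %s %s (CRI API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}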
	I0315 06:48:14.114806   41675 ssh_runner.go:195] Run: crio --version
	I0315 06:48:14.145548   41675 command_runner.go:130] > crio version 1.29.1
	I0315 06:48:14.145573   41675 command_runner.go:130] > Version:        1.29.1
	I0315 06:48:14.145581   41675 command_runner.go:130] > GitCommit:      unknown
	I0315 06:48:14.145588   41675 command_runner.go:130] > GitCommitDate:  unknown
	I0315 06:48:14.145594   41675 command_runner.go:130] > GitTreeState:   clean
	I0315 06:48:14.145606   41675 command_runner.go:130] > BuildDate:      2024-03-15T05:02:11Z
	I0315 06:48:14.145610   41675 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 06:48:14.145614   41675 command_runner.go:130] > Compiler:       gc
	I0315 06:48:14.145619   41675 command_runner.go:130] > Platform:       linux/amd64
	I0315 06:48:14.145624   41675 command_runner.go:130] > Linkmode:       dynamic
	I0315 06:48:14.145628   41675 command_runner.go:130] > BuildTags:      
	I0315 06:48:14.145633   41675 command_runner.go:130] >   containers_image_ostree_stub
	I0315 06:48:14.145638   41675 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 06:48:14.145648   41675 command_runner.go:130] >   btrfs_noversion
	I0315 06:48:14.145656   41675 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 06:48:14.145661   41675 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 06:48:14.145666   41675 command_runner.go:130] >   seccomp
	I0315 06:48:14.145670   41675 command_runner.go:130] > LDFlags:          unknown
	I0315 06:48:14.145676   41675 command_runner.go:130] > SeccompEnabled:   true
	I0315 06:48:14.145680   41675 command_runner.go:130] > AppArmorEnabled:  false
	I0315 06:48:14.145748   41675 ssh_runner.go:195] Run: crio --version
	I0315 06:48:14.175809   41675 command_runner.go:130] > crio version 1.29.1
	I0315 06:48:14.175831   41675 command_runner.go:130] > Version:        1.29.1
	I0315 06:48:14.175836   41675 command_runner.go:130] > GitCommit:      unknown
	I0315 06:48:14.175840   41675 command_runner.go:130] > GitCommitDate:  unknown
	I0315 06:48:14.175844   41675 command_runner.go:130] > GitTreeState:   clean
	I0315 06:48:14.175861   41675 command_runner.go:130] > BuildDate:      2024-03-15T05:02:11Z
	I0315 06:48:14.175865   41675 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 06:48:14.175869   41675 command_runner.go:130] > Compiler:       gc
	I0315 06:48:14.175873   41675 command_runner.go:130] > Platform:       linux/amd64
	I0315 06:48:14.175877   41675 command_runner.go:130] > Linkmode:       dynamic
	I0315 06:48:14.175881   41675 command_runner.go:130] > BuildTags:      
	I0315 06:48:14.175885   41675 command_runner.go:130] >   containers_image_ostree_stub
	I0315 06:48:14.175889   41675 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 06:48:14.175893   41675 command_runner.go:130] >   btrfs_noversion
	I0315 06:48:14.175897   41675 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 06:48:14.175902   41675 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 06:48:14.175906   41675 command_runner.go:130] >   seccomp
	I0315 06:48:14.175910   41675 command_runner.go:130] > LDFlags:          unknown
	I0315 06:48:14.175913   41675 command_runner.go:130] > SeccompEnabled:   true
	I0315 06:48:14.175917   41675 command_runner.go:130] > AppArmorEnabled:  false
	I0315 06:48:14.179096   41675 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 06:48:14.180333   41675 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:48:14.182765   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:14.183168   41675 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:48:14.183198   41675 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:48:14.183401   41675 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:48:14.187765   41675 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0315 06:48:14.187855   41675 kubeadm.go:877] updating cluster {Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:48:14.188019   41675 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 06:48:14.188076   41675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:48:14.233784   41675 command_runner.go:130] > {
	I0315 06:48:14.233824   41675 command_runner.go:130] >   "images": [
	I0315 06:48:14.233831   41675 command_runner.go:130] >     {
	I0315 06:48:14.233842   41675 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 06:48:14.233856   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.233865   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 06:48:14.233875   41675 command_runner.go:130] >       ],
	I0315 06:48:14.233882   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.233895   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 06:48:14.233909   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 06:48:14.233915   41675 command_runner.go:130] >       ],
	I0315 06:48:14.233925   41675 command_runner.go:130] >       "size": "65258016",
	I0315 06:48:14.233931   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.233941   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.233948   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.233954   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.233960   41675 command_runner.go:130] >     },
	I0315 06:48:14.233965   41675 command_runner.go:130] >     {
	I0315 06:48:14.233974   41675 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 06:48:14.233982   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.233990   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 06:48:14.234003   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234010   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234020   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 06:48:14.234031   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 06:48:14.234034   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234041   41675 command_runner.go:130] >       "size": "65291810",
	I0315 06:48:14.234046   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234055   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234059   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234063   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234069   41675 command_runner.go:130] >     },
	I0315 06:48:14.234072   41675 command_runner.go:130] >     {
	I0315 06:48:14.234078   41675 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 06:48:14.234083   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234089   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 06:48:14.234095   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234099   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234115   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 06:48:14.234125   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 06:48:14.234129   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234133   41675 command_runner.go:130] >       "size": "1363676",
	I0315 06:48:14.234137   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234144   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234148   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234152   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234156   41675 command_runner.go:130] >     },
	I0315 06:48:14.234161   41675 command_runner.go:130] >     {
	I0315 06:48:14.234167   41675 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 06:48:14.234171   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234177   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 06:48:14.234189   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234198   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234212   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 06:48:14.234240   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 06:48:14.234249   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234256   41675 command_runner.go:130] >       "size": "31470524",
	I0315 06:48:14.234273   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234283   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234289   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234294   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234298   41675 command_runner.go:130] >     },
	I0315 06:48:14.234304   41675 command_runner.go:130] >     {
	I0315 06:48:14.234310   41675 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 06:48:14.234314   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234320   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 06:48:14.234325   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234329   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234337   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 06:48:14.234346   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 06:48:14.234350   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234354   41675 command_runner.go:130] >       "size": "53621675",
	I0315 06:48:14.234360   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234363   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234367   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234373   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234377   41675 command_runner.go:130] >     },
	I0315 06:48:14.234380   41675 command_runner.go:130] >     {
	I0315 06:48:14.234386   41675 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 06:48:14.234392   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234396   41675 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 06:48:14.234399   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234403   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234410   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 06:48:14.234419   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 06:48:14.234425   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234430   41675 command_runner.go:130] >       "size": "295456551",
	I0315 06:48:14.234436   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234440   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234443   41675 command_runner.go:130] >       },
	I0315 06:48:14.234450   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234453   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234460   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234468   41675 command_runner.go:130] >     },
	I0315 06:48:14.234473   41675 command_runner.go:130] >     {
	I0315 06:48:14.234479   41675 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 06:48:14.234483   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234488   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 06:48:14.234494   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234497   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234504   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 06:48:14.234513   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 06:48:14.234517   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234521   41675 command_runner.go:130] >       "size": "127226832",
	I0315 06:48:14.234526   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234530   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234536   41675 command_runner.go:130] >       },
	I0315 06:48:14.234540   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234544   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234551   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234554   41675 command_runner.go:130] >     },
	I0315 06:48:14.234557   41675 command_runner.go:130] >     {
	I0315 06:48:14.234563   41675 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 06:48:14.234569   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234575   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 06:48:14.234580   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234584   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234605   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 06:48:14.234616   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 06:48:14.234620   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234624   41675 command_runner.go:130] >       "size": "123261750",
	I0315 06:48:14.234627   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234631   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234635   41675 command_runner.go:130] >       },
	I0315 06:48:14.234639   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234645   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234649   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234653   41675 command_runner.go:130] >     },
	I0315 06:48:14.234656   41675 command_runner.go:130] >     {
	I0315 06:48:14.234668   41675 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 06:48:14.234674   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234679   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 06:48:14.234685   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234689   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234696   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 06:48:14.234703   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 06:48:14.234706   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234710   41675 command_runner.go:130] >       "size": "74749335",
	I0315 06:48:14.234713   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.234717   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234720   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234723   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234726   41675 command_runner.go:130] >     },
	I0315 06:48:14.234729   41675 command_runner.go:130] >     {
	I0315 06:48:14.234735   41675 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 06:48:14.234739   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234743   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 06:48:14.234747   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234751   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234760   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 06:48:14.234769   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 06:48:14.234773   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234777   41675 command_runner.go:130] >       "size": "61551410",
	I0315 06:48:14.234781   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234784   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.234787   41675 command_runner.go:130] >       },
	I0315 06:48:14.234791   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234795   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234801   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.234805   41675 command_runner.go:130] >     },
	I0315 06:48:14.234808   41675 command_runner.go:130] >     {
	I0315 06:48:14.234814   41675 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 06:48:14.234818   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.234823   41675 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 06:48:14.234827   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234835   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.234845   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 06:48:14.234852   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 06:48:14.234857   41675 command_runner.go:130] >       ],
	I0315 06:48:14.234861   41675 command_runner.go:130] >       "size": "750414",
	I0315 06:48:14.234867   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.234871   41675 command_runner.go:130] >         "value": "65535"
	I0315 06:48:14.234874   41675 command_runner.go:130] >       },
	I0315 06:48:14.234878   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.234882   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.234886   41675 command_runner.go:130] >       "pinned": true
	I0315 06:48:14.234889   41675 command_runner.go:130] >     }
	I0315 06:48:14.234892   41675 command_runner.go:130] >   ]
	I0315 06:48:14.234895   41675 command_runner.go:130] > }
	I0315 06:48:14.235053   41675 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:48:14.235064   41675 crio.go:415] Images already preloaded, skipping extraction
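The preload check above decodes the crictl images --output json payload and verifies that the images required for Kubernetes v1.28.4 are already present, so no preload tarball extraction is needed. A trimmed Go sketch of that check (the JSON sample and the required-image list are abbreviated from the log, not minikube's full set):

// preloadcheck_sketch.go - decodes a `crictl images --output json` payload like
// the one above and reports whether expected images are present; illustrative only.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]},{"repoTags":["registry.k8s.io/etcd:3.5.9-0"]}]}`)

	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
	}
	for _, r := range required {
		fmt.Printf("%-45s preloaded=%v\n", r, have[r])
	}
}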
	I0315 06:48:14.235117   41675 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:48:14.278682   41675 command_runner.go:130] > {
	I0315 06:48:14.278706   41675 command_runner.go:130] >   "images": [
	I0315 06:48:14.278712   41675 command_runner.go:130] >     {
	I0315 06:48:14.278723   41675 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 06:48:14.278731   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278746   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 06:48:14.278751   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278759   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.278771   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 06:48:14.278785   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 06:48:14.278794   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278803   41675 command_runner.go:130] >       "size": "65258016",
	I0315 06:48:14.278810   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.278819   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.278835   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.278845   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.278849   41675 command_runner.go:130] >     },
	I0315 06:48:14.278858   41675 command_runner.go:130] >     {
	I0315 06:48:14.278876   41675 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 06:48:14.278886   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278894   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 06:48:14.278902   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278912   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.278925   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 06:48:14.278937   41675 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 06:48:14.278946   41675 command_runner.go:130] >       ],
	I0315 06:48:14.278952   41675 command_runner.go:130] >       "size": "65291810",
	I0315 06:48:14.278959   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.278967   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.278971   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.278975   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.278978   41675 command_runner.go:130] >     },
	I0315 06:48:14.278981   41675 command_runner.go:130] >     {
	I0315 06:48:14.278987   41675 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 06:48:14.278991   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.278996   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 06:48:14.279003   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279007   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279016   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 06:48:14.279025   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 06:48:14.279031   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279035   41675 command_runner.go:130] >       "size": "1363676",
	I0315 06:48:14.279042   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279046   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279054   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279063   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279070   41675 command_runner.go:130] >     },
	I0315 06:48:14.279073   41675 command_runner.go:130] >     {
	I0315 06:48:14.279078   41675 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 06:48:14.279084   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279089   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 06:48:14.279093   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279097   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279104   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 06:48:14.279135   41675 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 06:48:14.279141   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279145   41675 command_runner.go:130] >       "size": "31470524",
	I0315 06:48:14.279150   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279157   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279164   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279168   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279171   41675 command_runner.go:130] >     },
	I0315 06:48:14.279174   41675 command_runner.go:130] >     {
	I0315 06:48:14.279182   41675 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 06:48:14.279191   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279198   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 06:48:14.279208   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279214   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279230   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 06:48:14.279245   41675 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 06:48:14.279254   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279260   41675 command_runner.go:130] >       "size": "53621675",
	I0315 06:48:14.279268   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279275   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279281   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279285   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279288   41675 command_runner.go:130] >     },
	I0315 06:48:14.279292   41675 command_runner.go:130] >     {
	I0315 06:48:14.279298   41675 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 06:48:14.279302   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279310   41675 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 06:48:14.279316   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279320   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279328   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 06:48:14.279337   41675 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 06:48:14.279340   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279344   41675 command_runner.go:130] >       "size": "295456551",
	I0315 06:48:14.279348   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279352   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279358   41675 command_runner.go:130] >       },
	I0315 06:48:14.279366   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279370   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279376   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279380   41675 command_runner.go:130] >     },
	I0315 06:48:14.279383   41675 command_runner.go:130] >     {
	I0315 06:48:14.279389   41675 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 06:48:14.279395   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279400   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 06:48:14.279406   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279410   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279419   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 06:48:14.279429   41675 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 06:48:14.279435   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279439   41675 command_runner.go:130] >       "size": "127226832",
	I0315 06:48:14.279442   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279446   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279452   41675 command_runner.go:130] >       },
	I0315 06:48:14.279456   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279460   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279466   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279470   41675 command_runner.go:130] >     },
	I0315 06:48:14.279473   41675 command_runner.go:130] >     {
	I0315 06:48:14.279479   41675 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 06:48:14.279485   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279490   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 06:48:14.279494   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279498   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279515   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 06:48:14.279525   41675 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 06:48:14.279528   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279532   41675 command_runner.go:130] >       "size": "123261750",
	I0315 06:48:14.279536   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279540   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279543   41675 command_runner.go:130] >       },
	I0315 06:48:14.279547   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279551   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279562   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279568   41675 command_runner.go:130] >     },
	I0315 06:48:14.279570   41675 command_runner.go:130] >     {
	I0315 06:48:14.279580   41675 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 06:48:14.279586   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279591   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 06:48:14.279597   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279601   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279609   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 06:48:14.279618   41675 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 06:48:14.279624   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279629   41675 command_runner.go:130] >       "size": "74749335",
	I0315 06:48:14.279633   41675 command_runner.go:130] >       "uid": null,
	I0315 06:48:14.279640   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279643   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279647   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279652   41675 command_runner.go:130] >     },
	I0315 06:48:14.279656   41675 command_runner.go:130] >     {
	I0315 06:48:14.279664   41675 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 06:48:14.279668   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279675   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 06:48:14.279678   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279682   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279689   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 06:48:14.279699   41675 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 06:48:14.279705   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279708   41675 command_runner.go:130] >       "size": "61551410",
	I0315 06:48:14.279712   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279716   41675 command_runner.go:130] >         "value": "0"
	I0315 06:48:14.279722   41675 command_runner.go:130] >       },
	I0315 06:48:14.279726   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279732   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279735   41675 command_runner.go:130] >       "pinned": false
	I0315 06:48:14.279738   41675 command_runner.go:130] >     },
	I0315 06:48:14.279744   41675 command_runner.go:130] >     {
	I0315 06:48:14.279752   41675 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 06:48:14.279760   41675 command_runner.go:130] >       "repoTags": [
	I0315 06:48:14.279767   41675 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 06:48:14.279771   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279777   41675 command_runner.go:130] >       "repoDigests": [
	I0315 06:48:14.279783   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 06:48:14.279792   41675 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 06:48:14.279796   41675 command_runner.go:130] >       ],
	I0315 06:48:14.279800   41675 command_runner.go:130] >       "size": "750414",
	I0315 06:48:14.279803   41675 command_runner.go:130] >       "uid": {
	I0315 06:48:14.279807   41675 command_runner.go:130] >         "value": "65535"
	I0315 06:48:14.279813   41675 command_runner.go:130] >       },
	I0315 06:48:14.279817   41675 command_runner.go:130] >       "username": "",
	I0315 06:48:14.279822   41675 command_runner.go:130] >       "spec": null,
	I0315 06:48:14.279826   41675 command_runner.go:130] >       "pinned": true
	I0315 06:48:14.279832   41675 command_runner.go:130] >     }
	I0315 06:48:14.279835   41675 command_runner.go:130] >   ]
	I0315 06:48:14.279838   41675 command_runner.go:130] > }
	I0315 06:48:14.279984   41675 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 06:48:14.279999   41675 cache_images.go:84] Images are preloaded, skipping loading
	I0315 06:48:14.280005   41675 kubeadm.go:928] updating node { 192.168.39.29 8443 v1.28.4 crio true true} ...
	I0315 06:48:14.280095   41675 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-763469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
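The kubelet drop-in above is generated from the node's Kubernetes version, hostname and IP. A minimal Go sketch that renders an equivalent unit with text/template (the template text mirrors the log; the rendering helper itself is illustrative, not kubeadm.go's implementation):

// kubeletunit_sketch.go - renders a kubelet systemd override similar to the one
// logged above; values are taken from the log, the helper is an assumption.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.28.4", "multinode-763469", "192.168.39.29"}

	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}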
	I0315 06:48:14.280163   41675 ssh_runner.go:195] Run: crio config
	I0315 06:48:14.324997   41675 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0315 06:48:14.325028   41675 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0315 06:48:14.325039   41675 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0315 06:48:14.325045   41675 command_runner.go:130] > #
	I0315 06:48:14.325060   41675 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0315 06:48:14.325068   41675 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0315 06:48:14.325077   41675 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0315 06:48:14.325088   41675 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0315 06:48:14.325095   41675 command_runner.go:130] > # reload'.
	I0315 06:48:14.325104   41675 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0315 06:48:14.325116   41675 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0315 06:48:14.325128   41675 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0315 06:48:14.325137   41675 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0315 06:48:14.325157   41675 command_runner.go:130] > [crio]
	I0315 06:48:14.325166   41675 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0315 06:48:14.325178   41675 command_runner.go:130] > # containers images, in this directory.
	I0315 06:48:14.325185   41675 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0315 06:48:14.325199   41675 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0315 06:48:14.325209   41675 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0315 06:48:14.325220   41675 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0315 06:48:14.325236   41675 command_runner.go:130] > # imagestore = ""
	I0315 06:48:14.325248   41675 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0315 06:48:14.325261   41675 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0315 06:48:14.325274   41675 command_runner.go:130] > storage_driver = "overlay"
	I0315 06:48:14.325288   41675 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0315 06:48:14.325300   41675 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0315 06:48:14.325310   41675 command_runner.go:130] > storage_option = [
	I0315 06:48:14.325320   41675 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0315 06:48:14.325328   41675 command_runner.go:130] > ]
	I0315 06:48:14.325340   41675 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0315 06:48:14.325356   41675 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0315 06:48:14.325367   41675 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0315 06:48:14.325375   41675 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0315 06:48:14.325387   41675 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0315 06:48:14.325393   41675 command_runner.go:130] > # always happen on a node reboot
	I0315 06:48:14.325404   41675 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0315 06:48:14.325425   41675 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0315 06:48:14.325440   41675 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0315 06:48:14.325448   41675 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0315 06:48:14.325459   41675 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0315 06:48:14.325470   41675 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0315 06:48:14.325485   41675 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0315 06:48:14.325496   41675 command_runner.go:130] > # internal_wipe = true
	I0315 06:48:14.325507   41675 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0315 06:48:14.325519   41675 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0315 06:48:14.325531   41675 command_runner.go:130] > # internal_repair = false
	I0315 06:48:14.325543   41675 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0315 06:48:14.325557   41675 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0315 06:48:14.325566   41675 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0315 06:48:14.325586   41675 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0315 06:48:14.325599   41675 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0315 06:48:14.325608   41675 command_runner.go:130] > [crio.api]
	I0315 06:48:14.325618   41675 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0315 06:48:14.325628   41675 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0315 06:48:14.325636   41675 command_runner.go:130] > # IP address on which the stream server will listen.
	I0315 06:48:14.325645   41675 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0315 06:48:14.325655   41675 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0315 06:48:14.325665   41675 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0315 06:48:14.325671   41675 command_runner.go:130] > # stream_port = "0"
	I0315 06:48:14.325681   41675 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0315 06:48:14.325690   41675 command_runner.go:130] > # stream_enable_tls = false
	I0315 06:48:14.325699   41675 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0315 06:48:14.325709   41675 command_runner.go:130] > # stream_idle_timeout = ""
	I0315 06:48:14.325718   41675 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0315 06:48:14.325729   41675 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0315 06:48:14.325738   41675 command_runner.go:130] > # minutes.
	I0315 06:48:14.325744   41675 command_runner.go:130] > # stream_tls_cert = ""
	I0315 06:48:14.325761   41675 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0315 06:48:14.325772   41675 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0315 06:48:14.325779   41675 command_runner.go:130] > # stream_tls_key = ""
	I0315 06:48:14.325789   41675 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0315 06:48:14.325801   41675 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0315 06:48:14.325837   41675 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0315 06:48:14.325848   41675 command_runner.go:130] > # stream_tls_ca = ""
	I0315 06:48:14.325857   41675 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 06:48:14.325863   41675 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0315 06:48:14.325872   41675 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 06:48:14.325882   41675 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0315 06:48:14.325891   41675 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0315 06:48:14.325902   41675 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0315 06:48:14.325911   41675 command_runner.go:130] > [crio.runtime]
	I0315 06:48:14.325921   41675 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0315 06:48:14.325942   41675 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0315 06:48:14.325952   41675 command_runner.go:130] > # "nofile=1024:2048"
	I0315 06:48:14.325960   41675 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0315 06:48:14.325975   41675 command_runner.go:130] > # default_ulimits = [
	I0315 06:48:14.325982   41675 command_runner.go:130] > # ]
	I0315 06:48:14.325993   41675 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0315 06:48:14.326004   41675 command_runner.go:130] > # no_pivot = false
	I0315 06:48:14.326015   41675 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0315 06:48:14.326024   41675 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0315 06:48:14.326035   41675 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0315 06:48:14.326043   41675 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0315 06:48:14.326062   41675 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0315 06:48:14.326076   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 06:48:14.326086   41675 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0315 06:48:14.326098   41675 command_runner.go:130] > # Cgroup setting for conmon
	I0315 06:48:14.326111   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0315 06:48:14.326117   41675 command_runner.go:130] > conmon_cgroup = "pod"
	I0315 06:48:14.326129   41675 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0315 06:48:14.326138   41675 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0315 06:48:14.326150   41675 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 06:48:14.326160   41675 command_runner.go:130] > conmon_env = [
	I0315 06:48:14.326171   41675 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 06:48:14.326179   41675 command_runner.go:130] > ]
	I0315 06:48:14.326189   41675 command_runner.go:130] > # Additional environment variables to set for all the
	I0315 06:48:14.326199   41675 command_runner.go:130] > # containers. These are overridden if set in the
	I0315 06:48:14.326208   41675 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0315 06:48:14.326217   41675 command_runner.go:130] > # default_env = [
	I0315 06:48:14.326222   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326239   41675 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0315 06:48:14.326252   41675 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0315 06:48:14.326260   41675 command_runner.go:130] > # selinux = false
	I0315 06:48:14.326269   41675 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0315 06:48:14.326279   41675 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0315 06:48:14.326294   41675 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0315 06:48:14.326302   41675 command_runner.go:130] > # seccomp_profile = ""
	I0315 06:48:14.326310   41675 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0315 06:48:14.326320   41675 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0315 06:48:14.326330   41675 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0315 06:48:14.326340   41675 command_runner.go:130] > # which might increase security.
	I0315 06:48:14.326358   41675 command_runner.go:130] > # This option is currently deprecated,
	I0315 06:48:14.326370   41675 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0315 06:48:14.326377   41675 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0315 06:48:14.326390   41675 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0315 06:48:14.326399   41675 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0315 06:48:14.326412   41675 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0315 06:48:14.326424   41675 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0315 06:48:14.326432   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.326444   41675 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0315 06:48:14.326457   41675 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0315 06:48:14.326467   41675 command_runner.go:130] > # the cgroup blockio controller.
	I0315 06:48:14.326474   41675 command_runner.go:130] > # blockio_config_file = ""
	I0315 06:48:14.326484   41675 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0315 06:48:14.326493   41675 command_runner.go:130] > # blockio parameters.
	I0315 06:48:14.326499   41675 command_runner.go:130] > # blockio_reload = false
	I0315 06:48:14.326512   41675 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0315 06:48:14.326520   41675 command_runner.go:130] > # irqbalance daemon.
	I0315 06:48:14.326528   41675 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0315 06:48:14.326540   41675 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0315 06:48:14.326553   41675 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0315 06:48:14.326566   41675 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0315 06:48:14.326584   41675 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0315 06:48:14.326597   41675 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0315 06:48:14.326607   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.326616   41675 command_runner.go:130] > # rdt_config_file = ""
	I0315 06:48:14.326625   41675 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0315 06:48:14.326634   41675 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0315 06:48:14.326677   41675 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0315 06:48:14.326687   41675 command_runner.go:130] > # separate_pull_cgroup = ""
	I0315 06:48:14.326696   41675 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0315 06:48:14.326707   41675 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0315 06:48:14.326713   41675 command_runner.go:130] > # will be added.
	I0315 06:48:14.326720   41675 command_runner.go:130] > # default_capabilities = [
	I0315 06:48:14.326725   41675 command_runner.go:130] > # 	"CHOWN",
	I0315 06:48:14.326733   41675 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0315 06:48:14.326742   41675 command_runner.go:130] > # 	"FSETID",
	I0315 06:48:14.326755   41675 command_runner.go:130] > # 	"FOWNER",
	I0315 06:48:14.326764   41675 command_runner.go:130] > # 	"SETGID",
	I0315 06:48:14.326769   41675 command_runner.go:130] > # 	"SETUID",
	I0315 06:48:14.326778   41675 command_runner.go:130] > # 	"SETPCAP",
	I0315 06:48:14.326784   41675 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0315 06:48:14.326793   41675 command_runner.go:130] > # 	"KILL",
	I0315 06:48:14.326797   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326808   41675 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0315 06:48:14.326817   41675 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0315 06:48:14.326828   41675 command_runner.go:130] > # add_inheritable_capabilities = false
	I0315 06:48:14.326841   41675 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0315 06:48:14.326852   41675 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 06:48:14.326859   41675 command_runner.go:130] > # default_sysctls = [
	I0315 06:48:14.326867   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326874   41675 command_runner.go:130] > # List of devices on the host that a
	I0315 06:48:14.326887   41675 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0315 06:48:14.326897   41675 command_runner.go:130] > # allowed_devices = [
	I0315 06:48:14.326903   41675 command_runner.go:130] > # 	"/dev/fuse",
	I0315 06:48:14.326909   41675 command_runner.go:130] > # ]
	I0315 06:48:14.326916   41675 command_runner.go:130] > # List of additional devices, specified as
	I0315 06:48:14.326926   41675 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0315 06:48:14.326936   41675 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0315 06:48:14.326949   41675 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 06:48:14.326958   41675 command_runner.go:130] > # additional_devices = [
	I0315 06:48:14.326963   41675 command_runner.go:130] > # ]
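As a sketch following the format documented just above (not taken from this run), the two device lists could be populated like this, with hypothetical device paths:

	allowed_devices = [
		"/dev/fuse",
		"/dev/net/tun",           # hypothetical extra device exposed via the Devices annotation
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm", # hypothetical <host>:<container>:<permissions> mapping
	]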
	I0315 06:48:14.326976   41675 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0315 06:48:14.326981   41675 command_runner.go:130] > # cdi_spec_dirs = [
	I0315 06:48:14.326986   41675 command_runner.go:130] > # 	"/etc/cdi",
	I0315 06:48:14.326996   41675 command_runner.go:130] > # 	"/var/run/cdi",
	I0315 06:48:14.327002   41675 command_runner.go:130] > # ]
	I0315 06:48:14.327011   41675 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0315 06:48:14.327024   41675 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0315 06:48:14.327030   41675 command_runner.go:130] > # Defaults to false.
	I0315 06:48:14.327038   41675 command_runner.go:130] > # device_ownership_from_security_context = false
	I0315 06:48:14.327047   41675 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0315 06:48:14.327059   41675 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0315 06:48:14.327076   41675 command_runner.go:130] > # hooks_dir = [
	I0315 06:48:14.327086   41675 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0315 06:48:14.327092   41675 command_runner.go:130] > # ]
	I0315 06:48:14.327101   41675 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0315 06:48:14.327113   41675 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0315 06:48:14.327121   41675 command_runner.go:130] > # its default mounts from the following two files:
	I0315 06:48:14.327129   41675 command_runner.go:130] > #
	I0315 06:48:14.327139   41675 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0315 06:48:14.327150   41675 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0315 06:48:14.327161   41675 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0315 06:48:14.327168   41675 command_runner.go:130] > #
	I0315 06:48:14.327178   41675 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0315 06:48:14.327191   41675 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0315 06:48:14.327201   41675 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0315 06:48:14.327212   41675 command_runner.go:130] > #      only add mounts it finds in this file.
	I0315 06:48:14.327220   41675 command_runner.go:130] > #
	I0315 06:48:14.327226   41675 command_runner.go:130] > # default_mounts_file = ""
	I0315 06:48:14.327244   41675 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0315 06:48:14.327257   41675 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0315 06:48:14.327266   41675 command_runner.go:130] > pids_limit = 1024
	I0315 06:48:14.327278   41675 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0315 06:48:14.327289   41675 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0315 06:48:14.327300   41675 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0315 06:48:14.327313   41675 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0315 06:48:14.327322   41675 command_runner.go:130] > # log_size_max = -1
	I0315 06:48:14.327332   41675 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0315 06:48:14.327341   41675 command_runner.go:130] > # log_to_journald = false
	I0315 06:48:14.327350   41675 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0315 06:48:14.327370   41675 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0315 06:48:14.327385   41675 command_runner.go:130] > # Path to directory for container attach sockets.
	I0315 06:48:14.327400   41675 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0315 06:48:14.327412   41675 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0315 06:48:14.327421   41675 command_runner.go:130] > # bind_mount_prefix = ""
	I0315 06:48:14.327429   41675 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0315 06:48:14.327437   41675 command_runner.go:130] > # read_only = false
	I0315 06:48:14.327447   41675 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0315 06:48:14.327467   41675 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0315 06:48:14.327478   41675 command_runner.go:130] > # live configuration reload.
	I0315 06:48:14.327484   41675 command_runner.go:130] > # log_level = "info"
	I0315 06:48:14.327498   41675 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0315 06:48:14.327505   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.327511   41675 command_runner.go:130] > # log_filter = ""
	I0315 06:48:14.327519   41675 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0315 06:48:14.327528   41675 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0315 06:48:14.327538   41675 command_runner.go:130] > # separated by comma.
	I0315 06:48:14.327548   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327557   41675 command_runner.go:130] > # uid_mappings = ""
	I0315 06:48:14.327566   41675 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0315 06:48:14.327579   41675 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0315 06:48:14.327587   41675 command_runner.go:130] > # separated by comma.
	I0315 06:48:14.327602   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327611   41675 command_runner.go:130] > # gid_mappings = ""
	I0315 06:48:14.327622   41675 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0315 06:48:14.327642   41675 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 06:48:14.327655   41675 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 06:48:14.327670   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327680   41675 command_runner.go:130] > # minimum_mappable_uid = -1
	I0315 06:48:14.327693   41675 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0315 06:48:14.327706   41675 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 06:48:14.327718   41675 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 06:48:14.327732   41675 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 06:48:14.327741   41675 command_runner.go:130] > # minimum_mappable_gid = -1
	I0315 06:48:14.327754   41675 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0315 06:48:14.327766   41675 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0315 06:48:14.327778   41675 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0315 06:48:14.327788   41675 command_runner.go:130] > # ctr_stop_timeout = 30
	I0315 06:48:14.327805   41675 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0315 06:48:14.327817   41675 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0315 06:48:14.327828   41675 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0315 06:48:14.327840   41675 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0315 06:48:14.327855   41675 command_runner.go:130] > drop_infra_ctr = false
	I0315 06:48:14.327868   41675 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0315 06:48:14.327885   41675 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0315 06:48:14.327900   41675 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0315 06:48:14.327909   41675 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0315 06:48:14.327923   41675 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0315 06:48:14.327936   41675 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0315 06:48:14.327948   41675 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0315 06:48:14.327959   41675 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0315 06:48:14.327968   41675 command_runner.go:130] > # shared_cpuset = ""
	I0315 06:48:14.327981   41675 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0315 06:48:14.327992   41675 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0315 06:48:14.328001   41675 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0315 06:48:14.328015   41675 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0315 06:48:14.328024   41675 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0315 06:48:14.328036   41675 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0315 06:48:14.328049   41675 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0315 06:48:14.328058   41675 command_runner.go:130] > # enable_criu_support = false
	I0315 06:48:14.328066   41675 command_runner.go:130] > # Enable/disable the generation of the container,
	I0315 06:48:14.328076   41675 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0315 06:48:14.328085   41675 command_runner.go:130] > # enable_pod_events = false
	I0315 06:48:14.328093   41675 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0315 06:48:14.328114   41675 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0315 06:48:14.328123   41675 command_runner.go:130] > # default_runtime = "runc"
	I0315 06:48:14.328130   41675 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0315 06:48:14.328144   41675 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0315 06:48:14.328161   41675 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0315 06:48:14.328171   41675 command_runner.go:130] > # creation as a file is not desired either.
	I0315 06:48:14.328192   41675 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0315 06:48:14.328205   41675 command_runner.go:130] > # the hostname is being managed dynamically.
	I0315 06:48:14.328215   41675 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0315 06:48:14.328219   41675 command_runner.go:130] > # ]
	I0315 06:48:14.328228   41675 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0315 06:48:14.328246   41675 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0315 06:48:14.328259   41675 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0315 06:48:14.328270   41675 command_runner.go:130] > # Each entry in the table should follow the format:
	I0315 06:48:14.328274   41675 command_runner.go:130] > #
	I0315 06:48:14.328287   41675 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0315 06:48:14.328298   41675 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0315 06:48:14.328306   41675 command_runner.go:130] > # runtime_type = "oci"
	I0315 06:48:14.328360   41675 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0315 06:48:14.328373   41675 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0315 06:48:14.328378   41675 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0315 06:48:14.328385   41675 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0315 06:48:14.328392   41675 command_runner.go:130] > # monitor_env = []
	I0315 06:48:14.328400   41675 command_runner.go:130] > # privileged_without_host_devices = false
	I0315 06:48:14.328408   41675 command_runner.go:130] > # allowed_annotations = []
	I0315 06:48:14.328418   41675 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0315 06:48:14.328426   41675 command_runner.go:130] > # Where:
	I0315 06:48:14.328436   41675 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0315 06:48:14.328447   41675 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0315 06:48:14.328462   41675 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0315 06:48:14.328487   41675 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0315 06:48:14.328492   41675 command_runner.go:130] > #   in $PATH.
	I0315 06:48:14.328504   41675 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0315 06:48:14.328513   41675 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0315 06:48:14.328521   41675 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0315 06:48:14.328529   41675 command_runner.go:130] > #   state.
	I0315 06:48:14.328537   41675 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0315 06:48:14.328548   41675 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0315 06:48:14.328559   41675 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0315 06:48:14.328569   41675 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0315 06:48:14.328580   41675 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0315 06:48:14.328592   41675 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0315 06:48:14.328598   41675 command_runner.go:130] > #   The currently recognized values are:
	I0315 06:48:14.328610   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0315 06:48:14.328622   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0315 06:48:14.328637   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0315 06:48:14.328651   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0315 06:48:14.328664   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0315 06:48:14.328684   41675 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0315 06:48:14.328697   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0315 06:48:14.328709   41675 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0315 06:48:14.328725   41675 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0315 06:48:14.328737   41675 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0315 06:48:14.328746   41675 command_runner.go:130] > #   deprecated option "conmon".
	I0315 06:48:14.328758   41675 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0315 06:48:14.328769   41675 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0315 06:48:14.328782   41675 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0315 06:48:14.328791   41675 command_runner.go:130] > #   should be moved to the container's cgroup
	I0315 06:48:14.328801   41675 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0315 06:48:14.328825   41675 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0315 06:48:14.328837   41675 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0315 06:48:14.328847   41675 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0315 06:48:14.328854   41675 command_runner.go:130] > #
	I0315 06:48:14.328860   41675 command_runner.go:130] > # Using the seccomp notifier feature:
	I0315 06:48:14.328867   41675 command_runner.go:130] > #
	I0315 06:48:14.328875   41675 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0315 06:48:14.328887   41675 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0315 06:48:14.328894   41675 command_runner.go:130] > #
	I0315 06:48:14.328903   41675 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0315 06:48:14.328914   41675 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0315 06:48:14.328921   41675 command_runner.go:130] > #
	I0315 06:48:14.328928   41675 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0315 06:48:14.328936   41675 command_runner.go:130] > # feature.
	I0315 06:48:14.328940   41675 command_runner.go:130] > #
	I0315 06:48:14.328952   41675 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0315 06:48:14.328965   41675 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0315 06:48:14.328978   41675 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0315 06:48:14.328989   41675 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0315 06:48:14.329000   41675 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0315 06:48:14.329007   41675 command_runner.go:130] > #
	I0315 06:48:14.329015   41675 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0315 06:48:14.329027   41675 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0315 06:48:14.329036   41675 command_runner.go:130] > #
	I0315 06:48:14.329049   41675 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0315 06:48:14.329060   41675 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0315 06:48:14.329068   41675 command_runner.go:130] > #
	I0315 06:48:14.329078   41675 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0315 06:48:14.329096   41675 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0315 06:48:14.329104   41675 command_runner.go:130] > # limitation.
	I0315 06:48:14.329113   41675 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0315 06:48:14.329118   41675 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0315 06:48:14.329127   41675 command_runner.go:130] > runtime_type = "oci"
	I0315 06:48:14.329135   41675 command_runner.go:130] > runtime_root = "/run/runc"
	I0315 06:48:14.329145   41675 command_runner.go:130] > runtime_config_path = ""
	I0315 06:48:14.329155   41675 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0315 06:48:14.329166   41675 command_runner.go:130] > monitor_cgroup = "pod"
	I0315 06:48:14.329176   41675 command_runner.go:130] > monitor_exec_cgroup = ""
	I0315 06:48:14.329183   41675 command_runner.go:130] > monitor_env = [
	I0315 06:48:14.329192   41675 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 06:48:14.329200   41675 command_runner.go:130] > ]
	I0315 06:48:14.329207   41675 command_runner.go:130] > privileged_without_host_devices = false
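The runc handler above is the only runtime this config defines. Following the [crio.runtime.runtimes.*] format documented earlier, an additional handler might look like the sketch below; the crun path, root directory, and allowed annotation are illustrative assumptions, not part of this report:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"        # hypothetical binary location; must exist on the host
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",  # opts this handler into the seccomp notifier described above
	]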
	I0315 06:48:14.329221   41675 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0315 06:48:14.329237   41675 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0315 06:48:14.329250   41675 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0315 06:48:14.329260   41675 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0315 06:48:14.329274   41675 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0315 06:48:14.329284   41675 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0315 06:48:14.329299   41675 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0315 06:48:14.329313   41675 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0315 06:48:14.329324   41675 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0315 06:48:14.329337   41675 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0315 06:48:14.329344   41675 command_runner.go:130] > # Example:
	I0315 06:48:14.329351   41675 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0315 06:48:14.329360   41675 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0315 06:48:14.329370   41675 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0315 06:48:14.329381   41675 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0315 06:48:14.329389   41675 command_runner.go:130] > # cpuset = "0-1"
	I0315 06:48:14.329398   41675 command_runner.go:130] > # cpushares = 512
	I0315 06:48:14.329406   41675 command_runner.go:130] > # Where:
	I0315 06:48:14.329415   41675 command_runner.go:130] > # The workload name is workload-type.
	I0315 06:48:14.329426   41675 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0315 06:48:14.329438   41675 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0315 06:48:14.329446   41675 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0315 06:48:14.329464   41675 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0315 06:48:14.329472   41675 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
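Putting the workload description together, a complete entry could look like the following sketch; the workload name, annotation strings, and resource values are assumptions for illustration only:

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512
	cpuset = "0-1"
	# Pods opt in by carrying the "io.crio/throttled" annotation (value ignored);
	# per-container overrides use the annotation_prefix form described above.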
	I0315 06:48:14.329479   41675 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0315 06:48:14.329488   41675 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0315 06:48:14.329494   41675 command_runner.go:130] > # Default value is set to true
	I0315 06:48:14.329505   41675 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0315 06:48:14.329512   41675 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0315 06:48:14.329522   41675 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0315 06:48:14.329532   41675 command_runner.go:130] > # Default value is set to 'false'
	I0315 06:48:14.329542   41675 command_runner.go:130] > # disable_hostport_mapping = false
	I0315 06:48:14.329553   41675 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0315 06:48:14.329561   41675 command_runner.go:130] > #
	I0315 06:48:14.329571   41675 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0315 06:48:14.329583   41675 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0315 06:48:14.329595   41675 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0315 06:48:14.329607   41675 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0315 06:48:14.329617   41675 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0315 06:48:14.329623   41675 command_runner.go:130] > [crio.image]
	I0315 06:48:14.329633   41675 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0315 06:48:14.329643   41675 command_runner.go:130] > # default_transport = "docker://"
	I0315 06:48:14.329651   41675 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0315 06:48:14.329664   41675 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0315 06:48:14.329669   41675 command_runner.go:130] > # global_auth_file = ""
	I0315 06:48:14.329677   41675 command_runner.go:130] > # The image used to instantiate infra containers.
	I0315 06:48:14.329685   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.329696   41675 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0315 06:48:14.329707   41675 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0315 06:48:14.329716   41675 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0315 06:48:14.329726   41675 command_runner.go:130] > # This option supports live configuration reload.
	I0315 06:48:14.329732   41675 command_runner.go:130] > # pause_image_auth_file = ""
	I0315 06:48:14.329743   41675 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0315 06:48:14.329756   41675 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0315 06:48:14.329769   41675 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0315 06:48:14.329786   41675 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0315 06:48:14.329796   41675 command_runner.go:130] > # pause_command = "/pause"
	I0315 06:48:14.329806   41675 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0315 06:48:14.329825   41675 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0315 06:48:14.329837   41675 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0315 06:48:14.329849   41675 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0315 06:48:14.329859   41675 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0315 06:48:14.329868   41675 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0315 06:48:14.329872   41675 command_runner.go:130] > # pinned_images = [
	I0315 06:48:14.329875   41675 command_runner.go:130] > # ]
	I0315 06:48:14.329881   41675 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0315 06:48:14.329890   41675 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0315 06:48:14.329901   41675 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0315 06:48:14.329911   41675 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0315 06:48:14.329916   41675 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0315 06:48:14.329922   41675 command_runner.go:130] > # signature_policy = ""
	I0315 06:48:14.329927   41675 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0315 06:48:14.329936   41675 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0315 06:48:14.329943   41675 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0315 06:48:14.329949   41675 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0315 06:48:14.329957   41675 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0315 06:48:14.329961   41675 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0315 06:48:14.329970   41675 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0315 06:48:14.329975   41675 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0315 06:48:14.329982   41675 command_runner.go:130] > # changing them here.
	I0315 06:48:14.329985   41675 command_runner.go:130] > # insecure_registries = [
	I0315 06:48:14.329990   41675 command_runner.go:130] > # ]
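As the comments advise, registry configuration normally belongs in /etc/containers/registries.conf; a CRI-O-only override would be a sketch like the one below, with a hypothetical registry address:

	insecure_registries = [
		"registry.internal.example:5000",  # hypothetical private registry reachable without TLS
	]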
	I0315 06:48:14.329996   41675 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0315 06:48:14.330004   41675 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0315 06:48:14.330008   41675 command_runner.go:130] > # image_volumes = "mkdir"
	I0315 06:48:14.330013   41675 command_runner.go:130] > # Temporary directory to use for storing big files
	I0315 06:48:14.330019   41675 command_runner.go:130] > # big_files_temporary_dir = ""
	I0315 06:48:14.330024   41675 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0315 06:48:14.330030   41675 command_runner.go:130] > # CNI plugins.
	I0315 06:48:14.330033   41675 command_runner.go:130] > [crio.network]
	I0315 06:48:14.330038   41675 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0315 06:48:14.330046   41675 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0315 06:48:14.330054   41675 command_runner.go:130] > # cni_default_network = ""
	I0315 06:48:14.330062   41675 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0315 06:48:14.330074   41675 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0315 06:48:14.330082   41675 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0315 06:48:14.330086   41675 command_runner.go:130] > # plugin_dirs = [
	I0315 06:48:14.330091   41675 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0315 06:48:14.330094   41675 command_runner.go:130] > # ]
	I0315 06:48:14.330102   41675 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0315 06:48:14.330105   41675 command_runner.go:130] > [crio.metrics]
	I0315 06:48:14.330110   41675 command_runner.go:130] > # Globally enable or disable metrics support.
	I0315 06:48:14.330113   41675 command_runner.go:130] > enable_metrics = true
	I0315 06:48:14.330120   41675 command_runner.go:130] > # Specify enabled metrics collectors.
	I0315 06:48:14.330124   41675 command_runner.go:130] > # Per default all metrics are enabled.
	I0315 06:48:14.330133   41675 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0315 06:48:14.330139   41675 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0315 06:48:14.330147   41675 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0315 06:48:14.330151   41675 command_runner.go:130] > # metrics_collectors = [
	I0315 06:48:14.330157   41675 command_runner.go:130] > # 	"operations",
	I0315 06:48:14.330161   41675 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0315 06:48:14.330165   41675 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0315 06:48:14.330173   41675 command_runner.go:130] > # 	"operations_errors",
	I0315 06:48:14.330177   41675 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0315 06:48:14.330184   41675 command_runner.go:130] > # 	"image_pulls_by_name",
	I0315 06:48:14.330191   41675 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0315 06:48:14.330200   41675 command_runner.go:130] > # 	"image_pulls_failures",
	I0315 06:48:14.330206   41675 command_runner.go:130] > # 	"image_pulls_successes",
	I0315 06:48:14.330216   41675 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0315 06:48:14.330222   41675 command_runner.go:130] > # 	"image_layer_reuse",
	I0315 06:48:14.330237   41675 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0315 06:48:14.330246   41675 command_runner.go:130] > # 	"containers_oom_total",
	I0315 06:48:14.330252   41675 command_runner.go:130] > # 	"containers_oom",
	I0315 06:48:14.330261   41675 command_runner.go:130] > # 	"processes_defunct",
	I0315 06:48:14.330267   41675 command_runner.go:130] > # 	"operations_total",
	I0315 06:48:14.330276   41675 command_runner.go:130] > # 	"operations_latency_seconds",
	I0315 06:48:14.330283   41675 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0315 06:48:14.330293   41675 command_runner.go:130] > # 	"operations_errors_total",
	I0315 06:48:14.330299   41675 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0315 06:48:14.330309   41675 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0315 06:48:14.330322   41675 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0315 06:48:14.330332   41675 command_runner.go:130] > # 	"image_pulls_success_total",
	I0315 06:48:14.330339   41675 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0315 06:48:14.330348   41675 command_runner.go:130] > # 	"containers_oom_count_total",
	I0315 06:48:14.330358   41675 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0315 06:48:14.330369   41675 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0315 06:48:14.330374   41675 command_runner.go:130] > # ]
	I0315 06:48:14.330385   41675 command_runner.go:130] > # The port on which the metrics server will listen.
	I0315 06:48:14.330395   41675 command_runner.go:130] > # metrics_port = 9090
	I0315 06:48:14.330406   41675 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0315 06:48:14.330414   41675 command_runner.go:130] > # metrics_socket = ""
	I0315 06:48:14.330423   41675 command_runner.go:130] > # The certificate for the secure metrics server.
	I0315 06:48:14.330434   41675 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0315 06:48:14.330441   41675 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0315 06:48:14.330448   41675 command_runner.go:130] > # certificate on any modification event.
	I0315 06:48:14.330451   41675 command_runner.go:130] > # metrics_cert = ""
	I0315 06:48:14.330456   41675 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0315 06:48:14.330463   41675 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0315 06:48:14.330467   41675 command_runner.go:130] > # metrics_key = ""
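This run enables metrics with the default collectors and port. A sketch of narrowing the collectors (names taken from the list above) and pinning the port explicitly could look like:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]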
	I0315 06:48:14.330472   41675 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0315 06:48:14.330478   41675 command_runner.go:130] > [crio.tracing]
	I0315 06:48:14.330483   41675 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0315 06:48:14.330488   41675 command_runner.go:130] > # enable_tracing = false
	I0315 06:48:14.330493   41675 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0315 06:48:14.330500   41675 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0315 06:48:14.330506   41675 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0315 06:48:14.330514   41675 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
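Tracing is left disabled here. A sketch of turning it on against a hypothetical local OTLP collector, using the keys shown above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"          # hypothetical collector address
	tracing_sampling_rate_per_million = 100000   # sample roughly 10% of spans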
	I0315 06:48:14.330518   41675 command_runner.go:130] > # CRI-O NRI configuration.
	I0315 06:48:14.330524   41675 command_runner.go:130] > [crio.nri]
	I0315 06:48:14.330528   41675 command_runner.go:130] > # Globally enable or disable NRI.
	I0315 06:48:14.330531   41675 command_runner.go:130] > # enable_nri = false
	I0315 06:48:14.330535   41675 command_runner.go:130] > # NRI socket to listen on.
	I0315 06:48:14.330540   41675 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0315 06:48:14.330546   41675 command_runner.go:130] > # NRI plugin directory to use.
	I0315 06:48:14.330551   41675 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0315 06:48:14.330558   41675 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0315 06:48:14.330568   41675 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0315 06:48:14.330575   41675 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0315 06:48:14.330580   41675 command_runner.go:130] > # nri_disable_connections = false
	I0315 06:48:14.330586   41675 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0315 06:48:14.330590   41675 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0315 06:48:14.330598   41675 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0315 06:48:14.330602   41675 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0315 06:48:14.330610   41675 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0315 06:48:14.330613   41675 command_runner.go:130] > [crio.stats]
	I0315 06:48:14.330619   41675 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0315 06:48:14.330626   41675 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0315 06:48:14.330631   41675 command_runner.go:130] > # stats_collection_period = 0
	I0315 06:48:14.330665   41675 command_runner.go:130] ! time="2024-03-15 06:48:14.294307123Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0315 06:48:14.330685   41675 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
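For reference, the commented-out [crio.metrics]/[crio.tracing]/[crio.nri]/[crio.stats] lines above are CRI-O's documented defaults, all disabled. As an illustrative sketch only (nothing in this test enables them), the Go snippet below writes a drop-in under /etc/crio/crio.conf.d/ that would turn on the Prometheus metrics endpoint on the default port 9090; CRI-O reads that directory at startup, so the daemon would need a restart to pick it up. The file name is hypothetical.

package main

import (
	"fmt"
	"os"
)

// Drop-in enabling the metrics server described by the commented defaults above.
const metricsDropIn = `[crio.metrics]
enable_metrics = true
metrics_port = 9090
`

func main() {
	if err := os.MkdirAll("/etc/crio/crio.conf.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 10-metrics.conf is an arbitrary drop-in name chosen for this sketch.
	if err := os.WriteFile("/etc/crio/crio.conf.d/10-metrics.conf", []byte(metricsDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /etc/crio/crio.conf.d/10-metrics.conf; restart crio to apply")
}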
	I0315 06:48:14.330811   41675 cni.go:84] Creating CNI manager for ""
	I0315 06:48:14.330825   41675 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 06:48:14.330834   41675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:48:14.330851   41675 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-763469 NodeName:multinode-763469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:48:14.330989   41675 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-763469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 06:48:14.331055   41675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 06:48:14.343061   41675 command_runner.go:130] > kubeadm
	I0315 06:48:14.343084   41675 command_runner.go:130] > kubectl
	I0315 06:48:14.343089   41675 command_runner.go:130] > kubelet
	I0315 06:48:14.343106   41675 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:48:14.343148   41675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 06:48:14.354347   41675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0315 06:48:14.374506   41675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:48:14.394211   41675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0315 06:48:14.415822   41675 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0315 06:48:14.420096   41675 command_runner.go:130] > 192.168.39.29	control-plane.minikube.internal
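The grep above confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP. A minimal Go sketch of that check-then-append pattern (illustrative only; the entry and file path mirror the log) might look like:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.29\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// If the control-plane alias is already present, nothing to do.
	if strings.Contains(string(data), "control-plane.minikube.internal") {
		fmt.Println("hosts entry already present")
		return
	}
	// Otherwise append the mapping (requires write access to /etc/hosts).
	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	if _, err := f.WriteString(entry + "\n"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("appended", entry)
}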
	I0315 06:48:14.420172   41675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:48:14.577125   41675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:48:14.593458   41675 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469 for IP: 192.168.39.29
	I0315 06:48:14.593487   41675 certs.go:194] generating shared ca certs ...
	I0315 06:48:14.593526   41675 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:48:14.593688   41675 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:48:14.593755   41675 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:48:14.593768   41675 certs.go:256] generating profile certs ...
	I0315 06:48:14.593864   41675 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/client.key
	I0315 06:48:14.593939   41675 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key.722d4f19
	I0315 06:48:14.593999   41675 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key
	I0315 06:48:14.594013   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 06:48:14.594030   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 06:48:14.594045   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 06:48:14.594063   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 06:48:14.594078   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 06:48:14.594095   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 06:48:14.594105   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 06:48:14.594114   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 06:48:14.594162   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:48:14.594191   41675 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:48:14.594202   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:48:14.594242   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:48:14.594289   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:48:14.594325   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:48:14.594395   41675 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:48:14.594428   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem -> /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.594441   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.594452   41675 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.594987   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:48:14.620174   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:48:14.644258   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:48:14.668654   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:48:14.692544   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 06:48:14.717635   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 06:48:14.742686   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:48:14.767487   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/multinode-763469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 06:48:14.792204   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:48:14.817523   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:48:14.847861   41675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:48:14.877639   41675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:48:14.897129   41675 ssh_runner.go:195] Run: openssl version
	I0315 06:48:14.903719   41675 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0315 06:48:14.903794   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:48:14.916303   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921175   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921269   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.921332   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:48:14.927446   41675 command_runner.go:130] > 51391683
	I0315 06:48:14.927523   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:48:14.939131   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:48:14.951876   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957109   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957143   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.957207   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:48:14.962857   41675 command_runner.go:130] > 3ec20f2e
	I0315 06:48:14.962933   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:48:14.973317   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:48:14.985422   41675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.989995   41675 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.990023   41675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.990071   41675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:48:14.996480   41675 command_runner.go:130] > b5213941
	I0315 06:48:14.996811   41675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
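The three blocks above repeat the same pattern for each CA certificate: compute its OpenSSL subject hash (51391683, 3ec20f2e, b5213941) and symlink it into /etc/ssl/certs as <hash>.0 so the system trust store can find it. A minimal Go sketch of that step, assuming illustrative paths (this is not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert hashes a certificate with openssl and links it into the trust store.
func installCACert(certPath, certsDir string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// The trust store expects a <hash>.0 symlink pointing at the certificate.
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}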
	I0315 06:48:15.007049   41675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:48:15.011510   41675 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:48:15.011528   41675 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0315 06:48:15.011534   41675 command_runner.go:130] > Device: 253,1	Inode: 9432637     Links: 1
	I0315 06:48:15.011541   41675 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 06:48:15.011546   41675 command_runner.go:130] > Access: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011552   41675 command_runner.go:130] > Modify: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011557   41675 command_runner.go:130] > Change: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011561   41675 command_runner.go:130] >  Birth: 2024-03-15 06:41:57.314468403 +0000
	I0315 06:48:15.011690   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:48:15.017529   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.017596   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:48:15.023075   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.023221   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:48:15.028940   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.029014   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:48:15.034501   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.034675   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:48:15.040404   41675 command_runner.go:130] > Certificate will not expire
	I0315 06:48:15.040477   41675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 06:48:15.046469   41675 command_runner.go:130] > Certificate will not expire
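Each "openssl x509 -checkend 86400" run above asks whether the certificate stays valid for at least another 86400 seconds (24 hours). The same check can be done natively with crypto/x509; the sketch below is illustrative and reuses one of the paths from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to openssl's -checkend: compare NotAfter against now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}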
	I0315 06:48:15.046636   41675 kubeadm.go:391] StartCluster: {Name:multinode-763469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-763469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:48:15.046781   41675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:48:15.046828   41675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:48:15.088618   41675 command_runner.go:130] > 5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337
	I0315 06:48:15.088650   41675 command_runner.go:130] > 4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee
	I0315 06:48:15.088659   41675 command_runner.go:130] > 2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0
	I0315 06:48:15.088670   41675 command_runner.go:130] > 5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c
	I0315 06:48:15.088812   41675 command_runner.go:130] > 41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555
	I0315 06:48:15.089071   41675 command_runner.go:130] > 68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948
	I0315 06:48:15.089098   41675 command_runner.go:130] > e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe
	I0315 06:48:15.089242   41675 command_runner.go:130] > 26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886
	I0315 06:48:15.090814   41675 cri.go:89] found id: "5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337"
	I0315 06:48:15.090829   41675 cri.go:89] found id: "4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee"
	I0315 06:48:15.090832   41675 cri.go:89] found id: "2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0"
	I0315 06:48:15.090836   41675 cri.go:89] found id: "5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c"
	I0315 06:48:15.090838   41675 cri.go:89] found id: "41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555"
	I0315 06:48:15.090841   41675 cri.go:89] found id: "68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948"
	I0315 06:48:15.090844   41675 cri.go:89] found id: "e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe"
	I0315 06:48:15.090846   41675 cri.go:89] found id: "26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886"
	I0315 06:48:15.090859   41675 cri.go:89] found id: ""
	I0315 06:48:15.090908   41675 ssh_runner.go:195] Run: sudo runc list -f json
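The "found id" lines above come from running crictl with a namespace label filter and collecting the returned container IDs. A small sketch of that step, using the same crictl invocation the log shows (illustrative only, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all kube-system containers known to the CRI.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	// --quiet prints one container ID per line.
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}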
	
	
	==> CRI-O <==
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.740426020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485526740402708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a5e7b04-6ca8-4e51-9c6d-3371ac3aa89e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.741176932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cae3c51d-3c20-4831-b4da-9b4b62ee317e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.741228265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cae3c51d-3c20-4831-b4da-9b4b62ee317e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.741558177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cae3c51d-3c20-4831-b4da-9b4b62ee317e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.756662930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=776752da-ae0c-43cd-9730-76d319b0bb76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.756886380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=776752da-ae0c-43cd-9730-76d319b0bb76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.757756255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,PodSandboxId:834e045b863750c2d5fbe17baccb1c31675310fce27d0bf4061f2fb2946b20f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710485335726262699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,PodSandboxId:f6f04004382d02f7ca59ee2a060901ec7e6238b3a9cad0080ee3ead6fc6a8ca5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710485302257261170,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,PodSandboxId:56051dba7ac11ca682e6451f6d4035b56070d9605d2ba6187dd15eb7a03f3e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710485302224050448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,PodSandboxId:8ef3b383c17cac849637e8800107e1c72926e3f7ccec1ddc3881eced68720da7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710485301998389559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,PodSandboxId:b896f99a00ffcd1fbb65860b13590e12a314171f10cf062c01b5a20b7f7e1fb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485302028296104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.k
ubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,PodSandboxId:7d842a768b7ce33af4b0a281a6fce2f5de27c8e1c984ab69aadb9dfb3da9982c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710485297227069545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,PodSandboxId:9975d5c805a31142efbb78f50fe53d9cc8905af0fad930c4a2bdd816ab3ac420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710485297233595812,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e16
39c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,PodSandboxId:bcc63ec7ba04fadf4e77c2f04740a7a066fea0a57ae377bf2faf4f0634af1a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710485297123403346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,PodSandboxId:42b03cdc51414b74591396fd1b0a90ba5fad3da16d3b10392a118fa5701f3f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710485297136274244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2426966c6f2768441448b540bd3e1eb3e068848d52792b03212c065d9f21b5,PodSandboxId:2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710484994851141207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337,PodSandboxId:91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710484945874575034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d3449c0f30162f38f66e99faad6d7009193491cf764aeb8f26518a4dd4abeee,PodSandboxId:dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710484945841410350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0,PodSandboxId:968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710484944429600474,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c,PodSandboxId:88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710484940787356230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555,PodSandboxId:36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710484921308568987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce124283
63,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948,PodSandboxId:7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710484921264408223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46
707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe,PodSandboxId:9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710484921227047174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,
},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886,PodSandboxId:27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710484921184199541,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=776752da-ae0c-43cd-9730-76d319b0bb76 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.759381302Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ec5b031d-018f-4f45-be74-30c9fe4b84bf name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.759549585Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fa907528940347fc9096f5e21638ab3d0045d0edec105cb5869435f2601bea1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485335769518987,StartedAt:1710485335810281206,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-tsdl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4247590d-21e4-4ee1-8989-1cc15ec40318,},Annotations:map[string]string{io.kubernetes.container.hash: de3551fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4247590d-21e4-4ee1-8989-1cc15ec40318/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4247590d-21e4-4ee1-8989-1cc15ec40318/containers/busybox/f400d9d4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/4247590d-21e4-4ee1-8989-1cc15ec40318/volumes/kubernetes.io~projected/kube-api-access-z9bzh,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-5b5d89c9d6-tsdl7_4247590d-21e4-4ee1-8989-1cc15ec40318/busybox/1.log,Resources:&
ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ec5b031d-018f-4f45-be74-30c9fe4b84bf name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.760433441Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf7406d6-9a77-4c15-82d3-92f64862fe1b name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.760671085Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485302379573427,StartedAt:1710485302412881262,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240202-8f1494ea,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r6vss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5dba32-45a3-44e1-80e2-f585e324cf82,},Annotations:map[string]string{io.kubernetes.container.hash: bda69127,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ee5dba32-45a3-44e1-80e2-f585e324cf82/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ee5dba32-45a3-44e1-80e2-f585e324cf82/containers/kindnet-cni/8ff8c85d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/ee5dba32-45a3-44e1-80e2-f585e324cf82/volumes/kubernetes.io~projected/kube-api-access-4rr7p,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-r6vss_ee5dba32-45a3-44e1-80e2-f585e324cf82/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf7406d6-9a77-4c15-82d3-92f648
62fe1b name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.761331008Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=45df2772-aebd-408c-98c2-334e679ce1da name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.761468456Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485302330951146,StartedAt:1710485302363667227,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6j8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b13912-e637-4f97-9f58-16a39483c91e,},Annotations:map[string]string{io.kubernetes.container.hash: 9924f372,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/17b13912-e637-4f97-9f58-16a39483c91e/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/17b13912-e637-4f97-9f58-16a39483c91e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/17b13912-e637-4f97-9f58-16a39483c91e/containers/coredns/d1c7f983,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/17b13912-e637-4f97-9f58-16a39483c91e/volumes/kubernetes.io~projected/kube-api-access-l65d4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-x6j8r_17b13912-e637-4f97-9f58-16a39483c91e/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=45df2772-aebd-408c-98c2-334e679ce1da name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.762080602Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8401c990-2573-4512-a909-5634cd9d8a4c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.762211265Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485302114190747,StartedAt:1710485302147983082,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbg48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40630839-8887-4f18-b35c-e4f1f0e3a513,},Annotations:map[string]string{io.kubernetes.container.hash: 1f43420c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/40630839-8887-4f18-b35c-e4f1f0e3a513/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/40630839-8887-4f18-b35c-e4f1f0e3a513/containers/kube-proxy/fd3dd3ab,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/40630839-8887-4f18-b35c-e4f1f0e3a513/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/40630839-8887-4f18-b35c-e4f1f0e3a513/volumes/kubernetes.io~projected/kube-api-access-vq2lq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-zbg48_40630839-8887-4f18-b35c-e4f1f0e3a513/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=8401c990-2573-4512-a909-5634cd9d8a4c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.762646270Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,Verbose:false,}" file="otel-collector/interceptors.go:62" id=edd28d5c-597d-41f4-b530-8ff6842d4699 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.762822046Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b52525fcaadc2f9091e63b7ebb53abe1b8d580925c7784bab5f6b95b919ffbea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485302100856123,StartedAt:1710485302133262648,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569,},Annotations:map[string]string{io.kubernetes.container.hash: 16d1ac0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/containers/storage-provisioner/aa01503c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/volumes/kubernetes.io~projected/kube-api-access-jc8z2,Readonly:true,SelinuxRelabel:false
,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/storage-provisioner/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=edd28d5c-597d-41f4-b530-8ff6842d4699 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.763232682Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,Verbose:false,}" file="otel-collector/interceptors.go:62" id=11569dce-b564-4169-acc2-362590688822 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.763357716Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485297339720443,StartedAt:1710485297453710826,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffba728ba0f963033d8e304d674bfb10,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ae30e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ffba728ba0f963033d8e304d674bfb10/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ffba728ba0f963033d8e304d674bfb10/containers/kube-apiserver/f393a73f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-763469_ffba728ba0f963033d8e304d674bfb10/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=11569dce-b564-4169-acc2-362590688822 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.763930262Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0ec89cdf-0b3f-4bf8-bd01-018a5b1698ae name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.764093088Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485297337912634,StartedAt:1710485297399482716,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce435b119544c4c614d66991282e3c51,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ce435b119544c4c614d66991282e3c51/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ce435b119544c4c614d66991282e3c51/containers/kube-scheduler/f28a34fe,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-multinode-763469_ce435b119544c4c614d66991282e3c51/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeri
od:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0ec89cdf-0b3f-4bf8-bd01-018a5b1698ae name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.764527724Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f15b9130-9fa7-4d47-be34-040681edd0bb name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.764643346Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485297204551922,StartedAt:1710485297314305069,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf1cb0928f1352bca011ce12428363,},Annotations:map[string]string{io.kubernetes.container.hash: 363bf79e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ddaf1cb0928f1352bca011ce12428363/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ddaf1cb0928f1352bca011ce12428363/containers/etcd/d16aed51,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-mu
ltinode-763469_ddaf1cb0928f1352bca011ce12428363/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f15b9130-9fa7-4d47-be34-040681edd0bb name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.765084901Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c11dfe71-0747-4efb-b20a-516607699014 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 06:52:06 multinode-763469 crio[2833]: time="2024-03-15 06:52:06.765191855Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710485297194670975,StartedAt:1710485297300306559,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-763469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b83f83f07ca3131c46707e11d52155c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b83f83f07ca3131c46707e11d52155c8/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b83f83f07ca3131c46707e11d52155c8/containers/kube-controller-manager/261d2b59,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-763469_b83f83f07ca3131c46707e11d52155c8/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c11dfe71-0747-4efb-b20a-516607699014 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa90752894034       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   834e045b86375       busybox-5b5d89c9d6-tsdl7
	17be560545c16       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   f6f04004382d0       kindnet-r6vss
	575b095c9b6ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   56051dba7ac11       coredns-5dd5756b68-x6j8r
	b52525fcaadc2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   b896f99a00ffc       storage-provisioner
	252417b5766a5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   8ef3b383c17ca       kube-proxy-zbg48
	b95fae7e21b3c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   9975d5c805a31       kube-scheduler-multinode-763469
	df9f4d76cb959       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   7d842a768b7ce       kube-apiserver-multinode-763469
	1a0631cbffdeb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   42b03cdc51414       kube-controller-manager-multinode-763469
	4c9c05513bc4c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   bcc63ec7ba04f       etcd-multinode-763469
	de2426966c6f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   2be90b6664676       busybox-5b5d89c9d6-tsdl7
	5b2463b16c7ce       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   91971b4934cb0       coredns-5dd5756b68-x6j8r
	4d3449c0f3016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   dda1a3b456791       storage-provisioner
	2b9c8e78c1a0c       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   968021900c2b1       kindnet-r6vss
	5b1efdd4fe112       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   88c3400102352       kube-proxy-zbg48
	41d71a9d86c83       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   36458949e6331       etcd-multinode-763469
	68a042e4e4694       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   7604fe166214c       kube-controller-manager-multinode-763469
	e4cf73083a60d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   9d323dcc0e31a       kube-apiserver-multinode-763469
	26e6081b0c5f0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   27f1756dc37d8       kube-scheduler-multinode-763469
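
Note: the Request/Response pairs in the crio debug log above and the container-status table are both views of the same CRI endpoint (/runtime.v1.RuntimeService) on the CRI-O socket named in the kubeadm cri-socket annotation further down (unix:///var/run/crio/crio.sock). As a rough sketch only, not part of the test harness, an equivalent ListContainers call could be issued with a small Go client built on k8s.io/cri-api:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path taken from the node annotation in this log; reaching it
	// normally requires root on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, running or exited, which is
	// what produces the long ListContainers response in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  attempt=%d  %s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

The ATTEMPT column above corresponds to Metadata.Attempt: the attempt-1 containers are the instances started after the node restart, while the attempt-0 entries are the Exited containers from the first boot.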
	
	
	==> coredns [575b095c9b6ea619af7e3eee03263df65df808160cd028c8c45c99b3384a766f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60665 - 45009 "HINFO IN 1386717294849346258.5119469821599325126. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00890861s
	
	
	==> coredns [5b2463b16c7ce1c62183d4a13bb262d54088a65b9357b42cd66d032dc7d24337] <==
	[INFO] 10.244.0.3:52053 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001984596s
	[INFO] 10.244.0.3:47293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079275s
	[INFO] 10.244.0.3:42531 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055645s
	[INFO] 10.244.0.3:55263 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001275632s
	[INFO] 10.244.0.3:48514 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078172s
	[INFO] 10.244.0.3:51364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050546s
	[INFO] 10.244.0.3:59743 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007089s
	[INFO] 10.244.1.2:48857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169242s
	[INFO] 10.244.1.2:42639 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108367s
	[INFO] 10.244.1.2:54971 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134511s
	[INFO] 10.244.1.2:59158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071619s
	[INFO] 10.244.0.3:46690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105523s
	[INFO] 10.244.0.3:40047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094632s
	[INFO] 10.244.0.3:50430 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061466s
	[INFO] 10.244.0.3:52007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076394s
	[INFO] 10.244.1.2:48901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159478s
	[INFO] 10.244.1.2:39733 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192713s
	[INFO] 10.244.1.2:39738 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000234906s
	[INFO] 10.244.1.2:45872 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134492s
	[INFO] 10.244.0.3:42134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106099s
	[INFO] 10.244.0.3:42483 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121431s
	[INFO] 10.244.0.3:49061 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063754s
	[INFO] 10.244.0.3:35844 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098087s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
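
Note: the response flags in these query logs already tell the story: lookups for the bare name "kubernetes.default." are resolved through upstream recursion and come back NXDOMAIN (qr,rd,ra), while the cluster-suffixed "kubernetes.default.svc.cluster.local." is answered authoritatively by CoreDNS (qr,aa,rd, NOERROR). A minimal sketch, assuming it runs inside a cluster pod so the pod's resolv.conf points at the cluster DNS, of resolving the form that succeeds:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Fully qualified service name, the form CoreDNS answers
	// authoritatively in the log above.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs) // typically the ClusterIP of the kubernetes Service
}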
	
	
	==> describe nodes <==
	Name:               multinode-763469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-763469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=multinode-763469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_42_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-763469
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:52:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:48:21 +0000   Fri, 15 Mar 2024 06:42:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-763469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b034b4c5ab34dcab4dd3f5b0751ccfd
	  System UUID:                9b034b4c-5ab3-4dca-b4dd-3f5b0751ccfd
	  Boot ID:                    7eeb4a26-f179-434e-abfe-6a7b68cb5c71
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tsdl7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 coredns-5dd5756b68-x6j8r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m47s
	  kube-system                 etcd-multinode-763469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-r6vss                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-apiserver-multinode-763469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-763469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zbg48                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-multinode-763469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m47s                  node-controller  Node multinode-763469 event: Registered Node multinode-763469 in Controller
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-763469 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-763469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-763469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-763469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-763469 event: Registered Node multinode-763469 in Controller
	
	
	Name:               multinode-763469-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-763469-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=multinode-763469
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T06_49_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:49:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-763469-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:49:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:50:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:50:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:50:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 06:49:32 +0000   Fri, 15 Mar 2024 06:50:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-763469-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 4715799d3e8f4535b9aad0aadf936ade
	  System UUID:                4715799d-3e8f-4535-b9aa-d0aadf936ade
	  Boot ID:                    0a9c0c55-d554-4a2a-bcd5-36c90cd746e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-pk8lw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-zfcwm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m9s
	  kube-system                 kube-proxy-b8jmp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 9m3s                  kube-proxy       
	  Normal  Starting                 3m2s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m9s (x5 over 9m10s)  kubelet          Node multinode-763469-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m9s (x5 over 9m10s)  kubelet          Node multinode-763469-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m9s (x5 over 9m10s)  kubelet          Node multinode-763469-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m                    kubelet          Node multinode-763469-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x5 over 3m7s)   kubelet          Node multinode-763469-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x5 over 3m7s)   kubelet          Node multinode-763469-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x5 over 3m7s)   kubelet          Node multinode-763469-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m58s                 kubelet          Node multinode-763469-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                  node-controller  Node multinode-763469-m02 status is now: NodeNotReady
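
Note: all four conditions on multinode-763469-m02 are Unknown with "Kubelet stopped posting node status." and the node carries node.kubernetes.io/unreachable taints, i.e. the node-controller has marked it unreachable after its kubelet lease stopped being renewed (RenewTime 06:49:42 vs. the 06:52 timestamps elsewhere in this log). As a hedged illustration only (not part of the test code; the kubeconfig path is an assumption), the same state can be read programmatically with client-go:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the default kubeconfig location, where minikube also writes
	// its contexts.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print only the Ready condition; for the m02 node above this
		// would show Ready=Unknown with reason NodeStatusUnknown.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}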
	
	
	==> dmesg <==
	[  +0.183729] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.170501] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.270733] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.848588] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.060560] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.527046] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.571650] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 06:42] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +0.087785] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.186376] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.121856] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +5.088517] kauditd_printk_skb: 60 callbacks suppressed
	[Mar15 06:43] kauditd_printk_skb: 12 callbacks suppressed
	[Mar15 06:48] systemd-fstab-generator[2751]: Ignoring "noauto" option for root device
	[  +0.151156] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.187955] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.148580] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.264421] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +1.669915] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +1.697270] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[  +1.032152] kauditd_printk_skb: 164 callbacks suppressed
	[  +5.137046] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.810497] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.445533] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[ +20.065071] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [41d71a9d86c8350c6ed13a351e91f3d22e3b283f7fff6451ab27a1401e443555] <==
	{"level":"info","ts":"2024-03-15T06:42:02.717097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 2"}
	{"level":"info","ts":"2024-03-15T06:42:02.717155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-15T06:42:02.722468Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:multinode-763469 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:42:02.722559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:42:02.723088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:42:02.723686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:42:02.724118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	{"level":"info","ts":"2024-03-15T06:42:02.724231Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731681Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.731735Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:42:02.743159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:42:02.743219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-03-15T06:43:46.832867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.558822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1597526901996223042 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" mod_revision:593 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-trvg3x7bmcslyqnddo6n7s7g4i\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T06:43:46.83332Z","caller":"traceutil/trace.go:171","msg":"trace[1572112730] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"199.606053ms","start":"2024-03-15T06:43:46.633666Z","end":"2024-03-15T06:43:46.833272Z","steps":["trace[1572112730] 'process raft request'  (duration: 26.665163ms)","trace[1572112730] 'compare'  (duration: 171.277086ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T06:46:40.615964Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T06:46:40.616112Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-763469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"]}
	{"level":"warn","ts":"2024-03-15T06:46:40.616305Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.616393Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.694094Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.29:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T06:46:40.694295Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.29:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T06:46:40.69442Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97e52954629f162b","current-leader-member-id":"97e52954629f162b"}
	{"level":"info","ts":"2024-03-15T06:46:40.697365Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:46:40.697518Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:46:40.697593Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-763469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"]}
	
	
	==> etcd [4c9c05513bc4c94a5f20fdc24a5abf70b14dca01ca195c685d3012b61a30e61f] <==
	{"level":"info","ts":"2024-03-15T06:48:17.6279Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","added-peer-id":"97e52954629f162b","added-peer-peer-urls":["https://192.168.39.29:2380"]}
	{"level":"info","ts":"2024-03-15T06:48:17.628091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:48:17.628146Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:48:17.631302Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.631424Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.631453Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T06:48:17.644462Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T06:48:17.64612Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"97e52954629f162b","initial-advertise-peer-urls":["https://192.168.39.29:2380"],"listen-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.29:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T06:48:17.650049Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T06:48:17.645863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:48:17.650154Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-03-15T06:48:19.482154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgPreVoteResp from 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-03-15T06:48:19.482279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.482304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-03-15T06:48:19.488183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:48:19.488114Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:multinode-763469 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:48:19.489143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:48:19.489703Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.29:2379"}
	{"level":"info","ts":"2024-03-15T06:48:19.490387Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:48:19.490589Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:48:19.490627Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 06:52:07 up 10 min,  0 users,  load average: 0.90, 0.55, 0.27
	Linux multinode-763469 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17be560545c16e2104653fc29b470f62a4618272e954d705106b9062fd05dad0] <==
	I0315 06:51:03.146520       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:51:13.155278       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:51:13.155457       1 main.go:227] handling current node
	I0315 06:51:13.155548       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:51:13.155640       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:51:23.165491       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:51:23.165717       1 main.go:227] handling current node
	I0315 06:51:23.165762       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:51:23.166017       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:51:33.176197       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:51:33.176345       1 main.go:227] handling current node
	I0315 06:51:33.176380       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:51:33.176398       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:51:43.185893       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:51:43.186016       1 main.go:227] handling current node
	I0315 06:51:43.186040       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:51:43.186057       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:51:53.201636       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:51:53.202034       1 main.go:227] handling current node
	I0315 06:51:53.202204       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:51:53.202410       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:52:03.208598       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:52:03.208746       1 main.go:227] handling current node
	I0315 06:52:03.208836       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:52:03.208868       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [2b9c8e78c1a0cbc1c99a93eefda641900dd5e99ed296cf1b8435bf6df5db64a0] <==
	I0315 06:45:55.494005       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:05.499554       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:05.499621       1 main.go:227] handling current node
	I0315 06:46:05.499633       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:05.499645       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:05.499856       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:05.499882       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:15.509503       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:15.509628       1 main.go:227] handling current node
	I0315 06:46:15.509646       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:15.509653       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:15.509912       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:15.509991       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:25.524143       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:25.524196       1 main.go:227] handling current node
	I0315 06:46:25.524215       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:25.524230       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:25.524350       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:25.524377       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	I0315 06:46:35.529383       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0315 06:46:35.529485       1 main.go:227] handling current node
	I0315 06:46:35.529514       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0315 06:46:35.529532       1 main.go:250] Node multinode-763469-m02 has CIDR [10.244.1.0/24] 
	I0315 06:46:35.529756       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0315 06:46:35.529857       1 main.go:250] Node multinode-763469-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [df9f4d76cb959727fb11cc032e14be389d97feb656a62062e7e86d9e5f113901] <==
	I0315 06:48:20.932089       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 06:48:20.932130       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 06:48:20.932208       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 06:48:21.058570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0315 06:48:21.066090       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 06:48:21.126019       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 06:48:21.126592       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 06:48:21.126679       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 06:48:21.126918       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 06:48:21.130255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 06:48:21.130286       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 06:48:21.132448       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 06:48:21.132877       1 aggregator.go:166] initial CRD sync complete...
	I0315 06:48:21.132919       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 06:48:21.132925       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 06:48:21.132931       1 cache.go:39] Caches are synced for autoregister controller
	E0315 06:48:21.143258       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0315 06:48:21.936186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 06:48:23.850205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 06:48:23.971964       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0315 06:48:23.980620       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0315 06:48:24.060525       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 06:48:24.071147       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 06:48:34.118091       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 06:48:34.165584       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e4cf73083a60d92a9ed702e7336c64926e0a73d0c6709b4a9422e9a10cebcafe] <==
	I0315 06:42:07.477422       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 06:42:20.071275       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0315 06:42:20.112027       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0315 06:46:40.610980       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0315 06:46:40.636276       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.636716       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637322       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637395       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637423       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637460       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637489       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637543       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637605       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637633       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637666       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637692       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637850       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637881       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637909       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637933       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637965       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.637993       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.638025       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.641293       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 06:46:40.643453       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1a0631cbffdeb2a0561c6c9af6d38520f02311ba8dcfd1696806f9d3d9697ba7] <==
	I0315 06:49:09.196846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:09.218296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.518µs"
	I0315 06:49:09.234041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.778µs"
	I0315 06:49:12.758382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.31525ms"
	I0315 06:49:12.758483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.717µs"
	I0315 06:49:14.180544       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pk8lw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-pk8lw"
	I0315 06:49:28.841323       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:29.183920       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-763469-m03 event: Removing Node multinode-763469-m03 from Controller"
	I0315 06:49:31.886521       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:49:31.887217       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:31.936734       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.2.0/24"]
	I0315 06:49:34.184705       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-763469-m03 event: Registered Node multinode-763469-m03 in Controller"
	I0315 06:49:39.584936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:45.327140       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:49:49.204609       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-763469-m03 event: Removing Node multinode-763469-m03 from Controller"
	I0315 06:50:24.224415       1 event.go:307] "Event occurred" object="multinode-763469-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-763469-m02 status is now: NodeNotReady"
	I0315 06:50:24.246850       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pk8lw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:50:24.268583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.227982ms"
	I0315 06:50:24.269346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.47µs"
	I0315 06:50:24.269364       1 event.go:307] "Event occurred" object="kube-system/kindnet-zfcwm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:50:24.287624       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-b8jmp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:50:54.118305       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-gg57j"
	I0315 06:50:54.146325       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-gg57j"
	I0315 06:50:54.146375       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-7j4pn"
	I0315 06:50:54.175039       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-7j4pn"
	
	
	==> kube-controller-manager [68a042e4e4694cd1d6a6f17f43bf350a8423a2afa521f80603fb224daf0e2948] <==
	I0315 06:43:15.959005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.230031ms"
	I0315 06:43:15.959217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="88.09µs"
	I0315 06:43:48.083249       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:43:48.084715       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:43:48.122654       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gg57j"
	I0315 06:43:48.132045       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7j4pn"
	I0315 06:43:48.147669       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.2.0/24"]
	I0315 06:43:50.043065       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-763469-m03"
	I0315 06:43:50.043365       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-763469-m03 event: Registered Node multinode-763469-m03 in Controller"
	I0315 06:43:56.480647       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:25.873419       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:28.351506       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-763469-m03\" does not exist"
	I0315 06:44:28.353596       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:44:28.365696       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-763469-m03" podCIDRs=["10.244.3.0/24"]
	I0315 06:44:35.097150       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m02"
	I0315 06:45:15.097991       1 event.go:307] "Event occurred" object="multinode-763469-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-763469-m02 status is now: NodeNotReady"
	I0315 06:45:15.097991       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-763469-m03"
	I0315 06:45:15.113672       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-b8jmp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.128157       1 event.go:307] "Event occurred" object="kube-system/kindnet-zfcwm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.149287       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ktsnt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:15.155969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.577126ms"
	I0315 06:45:15.156438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.555µs"
	I0315 06:45:20.160001       1 event.go:307] "Event occurred" object="multinode-763469-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-763469-m03 status is now: NodeNotReady"
	I0315 06:45:20.171507       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-gg57j" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 06:45:20.186267       1 event.go:307] "Event occurred" object="kube-system/kindnet-7j4pn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [252417b5766a554874f14b35ebf708ec5395610d74b09e94cbe0dbaba3c196cc] <==
	I0315 06:48:22.381854       1 server_others.go:69] "Using iptables proxy"
	I0315 06:48:22.432722       1 node.go:141] Successfully retrieved node IP: 192.168.39.29
	I0315 06:48:22.515752       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:48:22.515926       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:48:22.519374       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:48:22.519490       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:48:22.519825       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:48:22.520540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:48:22.521498       1 config.go:188] "Starting service config controller"
	I0315 06:48:22.521629       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:48:22.521687       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:48:22.521706       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:48:22.522319       1 config.go:315] "Starting node config controller"
	I0315 06:48:22.522873       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:48:22.621878       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:48:22.621963       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:48:22.622964       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [5b1efdd4fe1126c090b968d859a884d33c949ef1c93a76e40f210806d6ef0a0c] <==
	I0315 06:42:21.208019       1 server_others.go:69] "Using iptables proxy"
	I0315 06:42:21.270636       1 node.go:141] Successfully retrieved node IP: 192.168.39.29
	I0315 06:42:21.366306       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 06:42:21.366327       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 06:42:21.373887       1 server_others.go:152] "Using iptables Proxier"
	I0315 06:42:21.375137       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 06:42:21.375967       1 server.go:846] "Version info" version="v1.28.4"
	I0315 06:42:21.375983       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:42:21.378287       1 config.go:188] "Starting service config controller"
	I0315 06:42:21.379114       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 06:42:21.379153       1 config.go:97] "Starting endpoint slice config controller"
	I0315 06:42:21.379159       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 06:42:21.380394       1 config.go:315] "Starting node config controller"
	I0315 06:42:21.380402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 06:42:21.479666       1 shared_informer.go:318] Caches are synced for service config
	I0315 06:42:21.479804       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 06:42:21.480714       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [26e6081b0c5f0b9c8876fed0778ecf3e6c9c238821ec86b3c45ac16d35994886] <==
	E0315 06:42:04.299227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 06:42:04.298403       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:42:04.299340       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:42:04.299464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:42:04.299566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:42:05.136319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 06:42:05.136371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 06:42:05.137580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 06:42:05.137600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 06:42:05.141914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 06:42:05.141957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 06:42:05.166246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 06:42:05.166294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 06:42:05.241160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 06:42:05.241298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 06:42:05.291238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 06:42:05.291380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 06:42:05.377246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 06:42:05.377382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 06:42:05.793414       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 06:42:05.793672       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0315 06:42:08.988496       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 06:46:40.633059       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 06:46:40.633167       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 06:46:40.633483       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b95fae7e21b3c61fa9f54e55e162b15da85d17c45ba50d4d2f010ebafb1bcca0] <==
	I0315 06:48:18.538453       1 serving.go:348] Generated self-signed cert in-memory
	W0315 06:48:21.028335       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 06:48:21.028451       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:48:21.028464       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 06:48:21.028471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 06:48:21.070014       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0315 06:48:21.070057       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:48:21.072525       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 06:48:21.072731       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 06:48:21.072834       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 06:48:21.072881       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 06:48:21.173470       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:50:16 multinode-763469 kubelet[3048]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:50:16 multinode-763469 kubelet[3048]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.517196    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/crio-dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Error finding container dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Status 404 returned error can't find the container with id dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.517613    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podb83f83f07ca3131c46707e11d52155c8/crio-7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Error finding container 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Status 404 returned error can't find the container with id 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.517929    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podffba728ba0f963033d8e304d674bfb10/crio-9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Error finding container 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Status 404 returned error can't find the container with id 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.518152    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4247590d-21e4-4ee1-8989-1cc15ec40318/crio-2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Error finding container 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Status 404 returned error can't find the container with id 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.518515    3048 manager.go:1106] Failed to create existing container: /kubepods/podee5dba32-45a3-44e1-80e2-f585e324cf82/crio-968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Error finding container 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Status 404 returned error can't find the container with id 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.518833    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podddaf1cb0928f1352bca011ce12428363/crio-36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Error finding container 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Status 404 returned error can't find the container with id 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.519071    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod17b13912-e637-4f97-9f58-16a39483c91e/crio-91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Error finding container 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Status 404 returned error can't find the container with id 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.519309    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod40630839-8887-4f18-b35c-e4f1f0e3a513/crio-88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Error finding container 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Status 404 returned error can't find the container with id 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf
	Mar 15 06:50:16 multinode-763469 kubelet[3048]: E0315 06:50:16.519546    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podce435b119544c4c614d66991282e3c51/crio-27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Error finding container 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Status 404 returned error can't find the container with id 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.479460    3048 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 06:51:16 multinode-763469 kubelet[3048]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 06:51:16 multinode-763469 kubelet[3048]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 06:51:16 multinode-763469 kubelet[3048]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 06:51:16 multinode-763469 kubelet[3048]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.517309    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod09a32ea8-f3bb-4ec3-b60f-8a4e1c5c2569/crio-dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Error finding container dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209: Status 404 returned error can't find the container with id dda1a3b456791d73c85e02d9fba4be61dffb888b4d3ddb2a0eeb253482f30209
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.517918    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod17b13912-e637-4f97-9f58-16a39483c91e/crio-91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Error finding container 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1: Status 404 returned error can't find the container with id 91971b4934cb08be6c776892fe91e7947a294797a0ab774cba70d4369bf47ef1
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.518286    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podffba728ba0f963033d8e304d674bfb10/crio-9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Error finding container 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475: Status 404 returned error can't find the container with id 9d323dcc0e31aa00010c169f2ad6fd1c7956093a2f9d82eab87f8a33adfdb475
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.518674    3048 manager.go:1106] Failed to create existing container: /kubepods/podee5dba32-45a3-44e1-80e2-f585e324cf82/crio-968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Error finding container 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f: Status 404 returned error can't find the container with id 968021900c2b12d0ecbe2c988af2fe80477ee1853001428561876160ea39937f
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.519116    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4247590d-21e4-4ee1-8989-1cc15ec40318/crio-2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Error finding container 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4: Status 404 returned error can't find the container with id 2be90b666467667f2c6d2f94157bad3204d03dc2d71730c1ed8b4aab524819f4
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.519455    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podb83f83f07ca3131c46707e11d52155c8/crio-7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Error finding container 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a: Status 404 returned error can't find the container with id 7604fe166214c84499b1168f486b7778e3a0c51e971f04742c5e5e78ba81cb1a
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.519799    3048 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod40630839-8887-4f18-b35c-e4f1f0e3a513/crio-88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Error finding container 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf: Status 404 returned error can't find the container with id 88c340010235209095e066625eca270286ef6a0d17a5afaa942fa31f54b194bf
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.520149    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podce435b119544c4c614d66991282e3c51/crio-27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Error finding container 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f: Status 404 returned error can't find the container with id 27f1756dc37d8564eba5099eb6096a34877749973c66569b1df8851f103abb2f
	Mar 15 06:51:16 multinode-763469 kubelet[3048]: E0315 06:51:16.520500    3048 manager.go:1106] Failed to create existing container: /kubepods/burstable/podddaf1cb0928f1352bca011ce12428363/crio-36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Error finding container 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed: Status 404 returned error can't find the container with id 36458949e6331a81db25d159cc35b5c8fc1dc63816d29b6e8267c2f2a8ef6bed
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 06:52:06.303879   43164 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-763469 -n multinode-763469
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-763469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.58s)
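The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: bufio.Scanner rejects any single line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), and lastStart.txt contains very long single-line entries such as the cluster-config dumps seen later in this report. A minimal sketch of the failure mode and the usual remedy, Scanner.Buffer, follows; it is illustrative only, not minikube's actual logs.go code, and the file path is a stand-in.

package main

import (
	"bufio"
	"errors"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // stand-in for .minikube/logs/lastStart.txt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call the cap is bufio.MaxScanTokenSize (64 KiB), and a
	// longer line makes sc.Err() return bufio.ErrTooLong, i.e. the
	// "token too long" message logged above. Raising the cap avoids it.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process one log line
	}
	if errors.Is(sc.Err(), bufio.ErrTooLong) {
		fmt.Fprintln(os.Stderr, "a line exceeded the scanner buffer:", sc.Err())
	} else if sc.Err() != nil {
		fmt.Fprintln(os.Stderr, sc.Err())
	}
}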

                                                
                                    
x
+
TestPreload (244.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-764289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m43.975605454s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764289 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-764289 image pull gcr.io/k8s-minikube/busybox: (2.717119978s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-764289
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-764289: (7.333981189s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-764289 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0315 06:59:21.072329   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:59:41.577642   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-764289 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.120181071s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764289 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
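The failed assertion above is the core of TestPreload: start a cluster with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, restart with preload enabled, and expect the previously pulled image to still appear in "image list". A condensed sketch of that flow, driving the same minikube commands logged above through os/exec, is shown below; it illustrates the test's steps only, it is not the suite's preload_test.go helpers, and it omits some of the logged flags.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes the same binary used throughout this report.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "test-preload-764289"
	steps := [][]string{
		// First start: no preload, pinned Kubernetes version (preload_test.go:44 above).
		{"start", "-p", profile, "--memory=2200", "--wait=true", "--preload=false",
			"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4"},
		// Pull an image that is not part of the preload tarball.
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		// Stop, then restart without --preload=false so the preload path is exercised.
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--wait=true",
			"--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Fprintln(os.Stderr, out)
			os.Exit(1)
		}
	}
	// Expectation from preload_test.go:76 above: the image pulled before the
	// stop must still be listed after the restart.
	out, err := run("-p", profile, "image", "list")
	if err != nil {
		fmt.Fprintln(os.Stderr, out)
		os.Exit(1)
	}
	if !strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox missing from image list (the failure reported above)")
	}
}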
panic.go:626: *** TestPreload FAILED at 2024-03-15 06:59:54.588632393 +0000 UTC m=+3839.592341037
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-764289 -n test-preload-764289
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-764289 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-764289 logs -n 25: (1.119598177s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469 sudo cat                                       | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt                       | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m02:/home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n                                                                 | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | multinode-763469-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-763469 ssh -n multinode-763469-m02 sudo cat                                   | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | /home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-763469 node stop m03                                                          | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	| node    | multinode-763469 node start                                                             | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC | 15 Mar 24 06:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| stop    | -p multinode-763469                                                                     | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:44 UTC |                     |
	| start   | -p multinode-763469                                                                     | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:46 UTC | 15 Mar 24 06:49 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC |                     |
	| node    | multinode-763469 node delete                                                            | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC | 15 Mar 24 06:49 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-763469 stop                                                                   | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:49 UTC |                     |
	| start   | -p multinode-763469                                                                     | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:52 UTC | 15 Mar 24 06:55 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-763469                                                                | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC |                     |
	| start   | -p multinode-763469-m02                                                                 | multinode-763469-m02 | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-763469-m03                                                                 | multinode-763469-m03 | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC | 15 Mar 24 06:55 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-763469                                                                 | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC |                     |
	| delete  | -p multinode-763469-m03                                                                 | multinode-763469-m03 | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC | 15 Mar 24 06:55 UTC |
	| delete  | -p multinode-763469                                                                     | multinode-763469     | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC | 15 Mar 24 06:55 UTC |
	| start   | -p test-preload-764289                                                                  | test-preload-764289  | jenkins | v1.32.0 | 15 Mar 24 06:55 UTC | 15 Mar 24 06:58 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-764289 image pull                                                          | test-preload-764289  | jenkins | v1.32.0 | 15 Mar 24 06:58 UTC | 15 Mar 24 06:58 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-764289                                                                  | test-preload-764289  | jenkins | v1.32.0 | 15 Mar 24 06:58 UTC | 15 Mar 24 06:58 UTC |
	| start   | -p test-preload-764289                                                                  | test-preload-764289  | jenkins | v1.32.0 | 15 Mar 24 06:58 UTC | 15 Mar 24 06:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-764289 image list                                                          | test-preload-764289  | jenkins | v1.32.0 | 15 Mar 24 06:59 UTC | 15 Mar 24 06:59 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 06:58:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 06:58:47.291464   45374 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:58:47.291578   45374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:58:47.291587   45374 out.go:304] Setting ErrFile to fd 2...
	I0315 06:58:47.291591   45374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:58:47.291765   45374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:58:47.292267   45374 out.go:298] Setting JSON to false
	I0315 06:58:47.293150   45374 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6024,"bootTime":1710479904,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:58:47.293203   45374 start.go:139] virtualization: kvm guest
	I0315 06:58:47.295379   45374 out.go:177] * [test-preload-764289] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:58:47.296733   45374 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:58:47.296733   45374 notify.go:220] Checking for updates...
	I0315 06:58:47.298093   45374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:58:47.299592   45374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:58:47.301057   45374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:58:47.302495   45374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:58:47.303835   45374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:58:47.305654   45374 config.go:182] Loaded profile config "test-preload-764289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0315 06:58:47.306062   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:58:47.306125   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:58:47.320548   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0315 06:58:47.320918   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:58:47.321422   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:58:47.321444   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:58:47.321764   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:58:47.321985   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:58:47.323819   45374 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0315 06:58:47.325217   45374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:58:47.325495   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:58:47.325532   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:58:47.339779   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42841
	I0315 06:58:47.340191   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:58:47.340652   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:58:47.340677   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:58:47.340997   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:58:47.341144   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:58:47.375337   45374 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:58:47.376619   45374 start.go:297] selected driver: kvm2
	I0315 06:58:47.376643   45374 start.go:901] validating driver "kvm2" against &{Name:test-preload-764289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-764289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:58:47.376775   45374 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:58:47.377458   45374 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:58:47.377535   45374 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 06:58:47.391981   45374 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 06:58:47.392317   45374 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:58:47.392381   45374 cni.go:84] Creating CNI manager for ""
	I0315 06:58:47.392400   45374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 06:58:47.392476   45374 start.go:340] cluster config:
	{Name:test-preload-764289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-764289 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:58:47.392588   45374 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 06:58:47.394306   45374 out.go:177] * Starting "test-preload-764289" primary control-plane node in "test-preload-764289" cluster
	I0315 06:58:47.395520   45374 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0315 06:58:47.495456   45374 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:58:47.495487   45374 cache.go:56] Caching tarball of preloaded images
	I0315 06:58:47.495650   45374 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0315 06:58:47.497323   45374 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0315 06:58:47.498411   45374 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 06:58:47.602252   45374 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0315 06:58:58.854283   45374 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 06:58:58.854378   45374 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 06:58:59.818426   45374 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0315 06:58:59.818572   45374 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/config.json ...
	I0315 06:58:59.818827   45374 start.go:360] acquireMachinesLock for test-preload-764289: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 06:58:59.818896   45374 start.go:364] duration metric: took 45.198µs to acquireMachinesLock for "test-preload-764289"
	I0315 06:58:59.818917   45374 start.go:96] Skipping create...Using existing machine configuration
	I0315 06:58:59.818922   45374 fix.go:54] fixHost starting: 
	I0315 06:58:59.819223   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:58:59.819259   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:58:59.833688   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0315 06:58:59.834156   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:58:59.834664   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:58:59.834687   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:58:59.834993   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:58:59.835172   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:58:59.835338   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetState
	I0315 06:58:59.837010   45374 fix.go:112] recreateIfNeeded on test-preload-764289: state=Stopped err=<nil>
	I0315 06:58:59.837030   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	W0315 06:58:59.837176   45374 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 06:58:59.839860   45374 out.go:177] * Restarting existing kvm2 VM for "test-preload-764289" ...
	I0315 06:58:59.841400   45374 main.go:141] libmachine: (test-preload-764289) Calling .Start
	I0315 06:58:59.841590   45374 main.go:141] libmachine: (test-preload-764289) Ensuring networks are active...
	I0315 06:58:59.842550   45374 main.go:141] libmachine: (test-preload-764289) Ensuring network default is active
	I0315 06:58:59.842913   45374 main.go:141] libmachine: (test-preload-764289) Ensuring network mk-test-preload-764289 is active
	I0315 06:58:59.843448   45374 main.go:141] libmachine: (test-preload-764289) Getting domain xml...
	I0315 06:58:59.844605   45374 main.go:141] libmachine: (test-preload-764289) Creating domain...
	I0315 06:59:01.019292   45374 main.go:141] libmachine: (test-preload-764289) Waiting to get IP...
	I0315 06:59:01.020263   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:01.020645   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:01.020734   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:01.020641   45442 retry.go:31] will retry after 229.93114ms: waiting for machine to come up
	I0315 06:59:01.252287   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:01.252751   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:01.252802   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:01.252727   45442 retry.go:31] will retry after 304.263814ms: waiting for machine to come up
	I0315 06:59:01.558390   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:01.558876   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:01.558909   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:01.558846   45442 retry.go:31] will retry after 315.709479ms: waiting for machine to come up
	I0315 06:59:01.876234   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:01.876682   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:01.876709   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:01.876625   45442 retry.go:31] will retry after 410.920209ms: waiting for machine to come up
	I0315 06:59:02.289267   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:02.289759   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:02.289791   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:02.289708   45442 retry.go:31] will retry after 607.430991ms: waiting for machine to come up
	I0315 06:59:02.898439   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:02.898819   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:02.898848   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:02.898766   45442 retry.go:31] will retry after 684.154611ms: waiting for machine to come up
	I0315 06:59:03.584802   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:03.585247   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:03.585268   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:03.585198   45442 retry.go:31] will retry after 930.777102ms: waiting for machine to come up
	I0315 06:59:04.517233   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:04.517812   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:04.517840   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:04.517762   45442 retry.go:31] will retry after 1.260274906s: waiting for machine to come up
	I0315 06:59:05.779570   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:05.779977   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:05.780005   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:05.779932   45442 retry.go:31] will retry after 1.454098147s: waiting for machine to come up
	I0315 06:59:07.236511   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:07.236975   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:07.237013   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:07.236967   45442 retry.go:31] will retry after 1.604480437s: waiting for machine to come up
	I0315 06:59:08.843716   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:08.844132   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:08.844160   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:08.844049   45442 retry.go:31] will retry after 2.798527568s: waiting for machine to come up
	I0315 06:59:11.644612   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:11.645010   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:11.645038   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:11.644960   45442 retry.go:31] will retry after 2.808985904s: waiting for machine to come up
	I0315 06:59:14.455876   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:14.456402   45374 main.go:141] libmachine: (test-preload-764289) DBG | unable to find current IP address of domain test-preload-764289 in network mk-test-preload-764289
	I0315 06:59:14.456426   45374 main.go:141] libmachine: (test-preload-764289) DBG | I0315 06:59:14.456357   45442 retry.go:31] will retry after 3.083110045s: waiting for machine to come up
	I0315 06:59:17.541706   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.542140   45374 main.go:141] libmachine: (test-preload-764289) Found IP for machine: 192.168.39.186
	I0315 06:59:17.542166   45374 main.go:141] libmachine: (test-preload-764289) Reserving static IP address...
	I0315 06:59:17.542184   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has current primary IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.542706   45374 main.go:141] libmachine: (test-preload-764289) Reserved static IP address: 192.168.39.186
	I0315 06:59:17.542730   45374 main.go:141] libmachine: (test-preload-764289) Waiting for SSH to be available...
	I0315 06:59:17.542754   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "test-preload-764289", mac: "52:54:00:ad:51:0e", ip: "192.168.39.186"} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.542776   45374 main.go:141] libmachine: (test-preload-764289) DBG | skip adding static IP to network mk-test-preload-764289 - found existing host DHCP lease matching {name: "test-preload-764289", mac: "52:54:00:ad:51:0e", ip: "192.168.39.186"}
	I0315 06:59:17.542792   45374 main.go:141] libmachine: (test-preload-764289) DBG | Getting to WaitForSSH function...
	I0315 06:59:17.544819   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.545190   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.545216   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.545258   45374 main.go:141] libmachine: (test-preload-764289) DBG | Using SSH client type: external
	I0315 06:59:17.545276   45374 main.go:141] libmachine: (test-preload-764289) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa (-rw-------)
	I0315 06:59:17.545315   45374 main.go:141] libmachine: (test-preload-764289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 06:59:17.545329   45374 main.go:141] libmachine: (test-preload-764289) DBG | About to run SSH command:
	I0315 06:59:17.545342   45374 main.go:141] libmachine: (test-preload-764289) DBG | exit 0
	I0315 06:59:17.673046   45374 main.go:141] libmachine: (test-preload-764289) DBG | SSH cmd err, output: <nil>: 
	I0315 06:59:17.673546   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetConfigRaw
	I0315 06:59:17.674218   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetIP
	I0315 06:59:17.676836   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.677286   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.677315   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.677512   45374 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/config.json ...
	I0315 06:59:17.677701   45374 machine.go:94] provisionDockerMachine start ...
	I0315 06:59:17.677718   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:17.677927   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:17.680152   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.680500   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.680523   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.680641   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:17.680822   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.680976   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.681109   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:17.681271   45374 main.go:141] libmachine: Using SSH client type: native
	I0315 06:59:17.681451   45374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0315 06:59:17.681461   45374 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 06:59:17.789377   45374 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 06:59:17.789407   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetMachineName
	I0315 06:59:17.789647   45374 buildroot.go:166] provisioning hostname "test-preload-764289"
	I0315 06:59:17.789671   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetMachineName
	I0315 06:59:17.789836   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:17.792917   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.793309   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.793342   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.793489   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:17.793719   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.793892   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.794071   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:17.794253   45374 main.go:141] libmachine: Using SSH client type: native
	I0315 06:59:17.794430   45374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0315 06:59:17.794447   45374 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-764289 && echo "test-preload-764289" | sudo tee /etc/hostname
	I0315 06:59:17.922881   45374 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-764289
	
	I0315 06:59:17.922908   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:17.925893   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.926343   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:17.926365   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:17.926537   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:17.926723   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.926919   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:17.927068   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:17.927320   45374 main.go:141] libmachine: Using SSH client type: native
	I0315 06:59:17.927482   45374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0315 06:59:17.927502   45374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-764289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-764289/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-764289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 06:59:18.047375   45374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 06:59:18.047409   45374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 06:59:18.047437   45374 buildroot.go:174] setting up certificates
	I0315 06:59:18.047449   45374 provision.go:84] configureAuth start
	I0315 06:59:18.047464   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetMachineName
	I0315 06:59:18.047764   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetIP
	I0315 06:59:18.050495   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.050841   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.050880   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.050987   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.053104   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.053513   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.053545   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.053665   45374 provision.go:143] copyHostCerts
	I0315 06:59:18.053718   45374 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 06:59:18.053729   45374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 06:59:18.053792   45374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 06:59:18.053883   45374 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 06:59:18.053892   45374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 06:59:18.053917   45374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 06:59:18.053969   45374 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 06:59:18.053976   45374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 06:59:18.053994   45374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 06:59:18.054042   45374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.test-preload-764289 san=[127.0.0.1 192.168.39.186 localhost minikube test-preload-764289]
	I0315 06:59:18.171445   45374 provision.go:177] copyRemoteCerts
	I0315 06:59:18.171506   45374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 06:59:18.171536   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.174326   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.174674   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.174706   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.174877   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.175055   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.175174   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.175304   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:18.260198   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 06:59:18.287893   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0315 06:59:18.314695   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 06:59:18.341091   45374 provision.go:87] duration metric: took 293.623764ms to configureAuth
	I0315 06:59:18.341144   45374 buildroot.go:189] setting minikube options for container-runtime
	I0315 06:59:18.341347   45374 config.go:182] Loaded profile config "test-preload-764289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0315 06:59:18.341429   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.344249   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.344696   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.344726   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.344951   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.345167   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.345323   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.345424   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.345623   45374 main.go:141] libmachine: Using SSH client type: native
	I0315 06:59:18.345781   45374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0315 06:59:18.345800   45374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 06:59:18.633256   45374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 06:59:18.633284   45374 machine.go:97] duration metric: took 955.572273ms to provisionDockerMachine
	I0315 06:59:18.633297   45374 start.go:293] postStartSetup for "test-preload-764289" (driver="kvm2")
	I0315 06:59:18.633307   45374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 06:59:18.633327   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:18.633720   45374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 06:59:18.633747   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.636385   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.636750   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.636776   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.636932   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.637109   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.637293   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.637422   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:18.724165   45374 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 06:59:18.728592   45374 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 06:59:18.728621   45374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 06:59:18.728684   45374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 06:59:18.728765   45374 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 06:59:18.728851   45374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 06:59:18.738459   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:59:18.764230   45374 start.go:296] duration metric: took 130.918894ms for postStartSetup
	I0315 06:59:18.764271   45374 fix.go:56] duration metric: took 18.945348598s for fixHost
	I0315 06:59:18.764293   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.766879   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.767254   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.767283   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.767439   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.767641   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.767832   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.767977   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.768157   45374 main.go:141] libmachine: Using SSH client type: native
	I0315 06:59:18.768331   45374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0315 06:59:18.768343   45374 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 06:59:18.877422   45374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710485958.846121241
	
	I0315 06:59:18.877457   45374 fix.go:216] guest clock: 1710485958.846121241
	I0315 06:59:18.877464   45374 fix.go:229] Guest: 2024-03-15 06:59:18.846121241 +0000 UTC Remote: 2024-03-15 06:59:18.764274727 +0000 UTC m=+31.518598414 (delta=81.846514ms)
	I0315 06:59:18.877482   45374 fix.go:200] guest clock delta is within tolerance: 81.846514ms
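	The clock comparison above takes the guest's date +%s.%N output and checks the delta against the local clock. A minimal Go sketch of that kind of check, assuming a 2-second tolerance (the actual threshold is not shown in this log):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// withinTolerance parses the guest's "date +%s.%N" output and reports how far
	// it is from the local clock. The 2-second tolerance used in main is an
	// assumption for illustration only.
	func withinTolerance(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
		}
		// float64 loses a little nanosecond precision; good enough for a skew check.
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		delta, ok, err := withinTolerance("1710485958.846121241", 2*time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}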
	I0315 06:59:18.877486   45374 start.go:83] releasing machines lock for "test-preload-764289", held for 19.058577095s
	I0315 06:59:18.877502   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:18.877753   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetIP
	I0315 06:59:18.880202   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.880549   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.880574   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.880737   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:18.881283   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:18.881468   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:18.881578   45374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 06:59:18.881619   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.881695   45374 ssh_runner.go:195] Run: cat /version.json
	I0315 06:59:18.881721   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:18.884147   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.884536   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.884564   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.884591   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.884719   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.884911   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.885039   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:18.885064   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:18.885086   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.885264   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:18.885301   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:18.885402   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:18.885522   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:18.885683   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:18.962260   45374 ssh_runner.go:195] Run: systemctl --version
	I0315 06:59:19.002479   45374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 06:59:19.146140   45374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 06:59:19.152712   45374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 06:59:19.152771   45374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 06:59:19.168997   45374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 06:59:19.169022   45374 start.go:494] detecting cgroup driver to use...
	I0315 06:59:19.169077   45374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 06:59:19.185221   45374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 06:59:19.199200   45374 docker.go:217] disabling cri-docker service (if available) ...
	I0315 06:59:19.199266   45374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 06:59:19.213313   45374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 06:59:19.227169   45374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 06:59:19.339244   45374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 06:59:19.486580   45374 docker.go:233] disabling docker service ...
	I0315 06:59:19.486642   45374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 06:59:19.502319   45374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 06:59:19.516359   45374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 06:59:19.675396   45374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 06:59:19.785421   45374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 06:59:19.799645   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 06:59:19.819380   45374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0315 06:59:19.819432   45374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:59:19.829832   45374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 06:59:19.829884   45374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:59:19.840299   45374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:59:19.850514   45374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 06:59:19.861000   45374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 06:59:19.871801   45374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 06:59:19.881290   45374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 06:59:19.881345   45374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 06:59:19.893936   45374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
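	The fallback above is: if the bridge-nf-call-iptables sysctl is absent, load br_netfilter, then enable IPv4 forwarding. A small Go sketch of that sequence (requires root; standard Linux paths assumed, not taken from minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback seen in the log: if the
	// bridge-nf-call-iptables sysctl is not present, load the br_netfilter
	// module, then make sure IPv4 forwarding is on.
	func ensureBridgeNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}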
	I0315 06:59:19.903567   45374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:59:20.015463   45374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 06:59:20.155796   45374 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 06:59:20.155869   45374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 06:59:20.161133   45374 start.go:562] Will wait 60s for crictl version
	I0315 06:59:20.161199   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:20.165568   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 06:59:20.205920   45374 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 06:59:20.206004   45374 ssh_runner.go:195] Run: crio --version
	I0315 06:59:20.236410   45374 ssh_runner.go:195] Run: crio --version
	I0315 06:59:20.267907   45374 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0315 06:59:20.269458   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetIP
	I0315 06:59:20.271851   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:20.272123   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:20.272145   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:20.272380   45374 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 06:59:20.276936   45374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
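	The hosts update above removes any stale host.minikube.internal line and appends the gateway mapping. A self-contained Go sketch of the same idempotent edit, written against a scratch file instead of /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry reproduces the shell pattern in the log: drop any existing
	// line that maps the name, then append "ip<TAB>name".
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}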
	I0315 06:59:20.289528   45374 kubeadm.go:877] updating cluster {Name:test-preload-764289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-764289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 06:59:20.289645   45374 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0315 06:59:20.289686   45374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:59:20.327189   45374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
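	The "assuming images are not preloaded" decision above comes from listing the runtime's images and looking for the expected references. A rough Go sketch of that check; the JSON field names follow what recent crictl versions print and are an assumption here, not taken from minikube's code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages matches the shape of `crictl images --output json`:
	// a top-level "images" array whose entries carry repoTags.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the runtime already knows the given reference.
	func hasImage(ref string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if strings.EqualFold(tag, ref) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.24.4")
		if err != nil {
			fmt.Println("crictl query failed:", err)
			return
		}
		fmt.Println("preloaded:", ok)
	}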
	I0315 06:59:20.327244   45374 ssh_runner.go:195] Run: which lz4
	I0315 06:59:20.331279   45374 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 06:59:20.335603   45374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 06:59:20.335633   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0315 06:59:22.074164   45374 crio.go:444] duration metric: took 1.742904717s to copy over tarball
	I0315 06:59:22.074229   45374 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 06:59:24.579382   45374 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.505129895s)
	I0315 06:59:24.579421   45374 crio.go:451] duration metric: took 2.505226946s to extract the tarball
	I0315 06:59:24.579430   45374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 06:59:24.621408   45374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 06:59:24.668557   45374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0315 06:59:24.668601   45374 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 06:59:24.668663   45374 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:59:24.668712   45374 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0315 06:59:24.668740   45374 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0315 06:59:24.668773   45374 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0315 06:59:24.668723   45374 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0315 06:59:24.668896   45374 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0315 06:59:24.668914   45374 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0315 06:59:24.668949   45374 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0315 06:59:24.670231   45374 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0315 06:59:24.670252   45374 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0315 06:59:24.670255   45374 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:59:24.670235   45374 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0315 06:59:24.670234   45374 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0315 06:59:24.670234   45374 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0315 06:59:24.670230   45374 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0315 06:59:24.670442   45374 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0315 06:59:24.876781   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0315 06:59:24.900960   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0315 06:59:24.916019   45374 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0315 06:59:24.916065   45374 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0315 06:59:24.916130   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:24.919046   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0315 06:59:24.952331   45374 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0315 06:59:24.952388   45374 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0315 06:59:24.952388   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0315 06:59:24.952434   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:24.981327   45374 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0315 06:59:24.981369   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0315 06:59:24.981371   45374 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0315 06:59:24.981398   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:25.006311   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0315 06:59:25.006407   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0315 06:59:25.029567   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0315 06:59:25.029645   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0315 06:59:25.029662   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0315 06:59:25.029690   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0315 06:59:25.029705   45374 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0315 06:59:25.029750   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0315 06:59:25.033807   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0315 06:59:25.034069   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0315 06:59:25.036881   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0315 06:59:25.063248   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0315 06:59:25.139419   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0315 06:59:25.139533   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0315 06:59:25.619067   45374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:59:27.774831   45374 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.745057963s)
	I0315 06:59:27.774874   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0315 06:59:27.774950   45374 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.745268964s)
	I0315 06:59:27.774983   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0315 06:59:27.774995   45374 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0315 06:59:27.775010   45374 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (2.741176843s)
	I0315 06:59:27.775048   45374 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0315 06:59:27.775066   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0315 06:59:27.775074   45374 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (2.740982988s)
	I0315 06:59:27.775089   45374 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0315 06:59:27.775102   45374 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0315 06:59:27.775123   45374 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0315 06:59:27.775136   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:27.775159   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:27.775189   45374 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (2.738283301s)
	I0315 06:59:27.775214   45374 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (2.711937049s)
	I0315 06:59:27.775230   45374 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0315 06:59:27.775246   45374 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0315 06:59:27.775252   45374 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0315 06:59:27.775262   45374 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0315 06:59:27.775287   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:27.775294   45374 ssh_runner.go:195] Run: which crictl
	I0315 06:59:27.775334   45374 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.635784945s)
	I0315 06:59:27.775355   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0315 06:59:27.775417   45374 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.15630719s)
	I0315 06:59:27.931389   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0315 06:59:27.931426   45374 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0315 06:59:27.931490   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0315 06:59:27.931493   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0315 06:59:27.931551   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0315 06:59:27.931616   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0315 06:59:27.931666   45374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0315 06:59:28.057008   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0315 06:59:28.057106   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0315 06:59:28.057121   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0315 06:59:28.057135   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0315 06:59:28.057164   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0315 06:59:28.057229   45374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0315 06:59:28.825579   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0315 06:59:28.825679   45374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0315 06:59:28.825777   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0315 06:59:28.825805   45374 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0315 06:59:28.825824   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0315 06:59:28.825838   45374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0315 06:59:28.825891   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0315 06:59:28.825841   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0315 06:59:29.274333   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0315 06:59:29.274386   45374 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0315 06:59:29.274413   45374 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0315 06:59:29.274433   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0315 06:59:31.536595   45374 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.262113588s)
	I0315 06:59:31.536631   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0315 06:59:31.536663   45374 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0315 06:59:31.536742   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0315 06:59:31.981700   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0315 06:59:31.981747   45374 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0315 06:59:31.981791   45374 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0315 06:59:32.725353   45374 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0315 06:59:32.725395   45374 cache_images.go:123] Successfully loaded all cached images
	I0315 06:59:32.725400   45374 cache_images.go:92] duration metric: took 8.056787782s to LoadCachedImages
	I0315 06:59:32.725411   45374 kubeadm.go:928] updating node { 192.168.39.186 8443 v1.24.4 crio true true} ...
	I0315 06:59:32.725523   45374 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-764289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-764289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
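	The kubelet unit printed above is generated from the node's name, IP and Kubernetes version. An illustrative text/template sketch that reproduces that drop-in; the template text is reconstructed from the log output, not minikube's real template:

	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	// kubeletDropIn mirrors the systemd override shown in the log; the flag set
	// is taken from that output and is an assumption beyond what the log shows.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		data := struct{ Version, NodeName, NodeIP string }{"v1.24.4", "test-preload-764289", "192.168.39.186"}
		if err := t.Execute(os.Stdout, data); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}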
	I0315 06:59:32.725599   45374 ssh_runner.go:195] Run: crio config
	I0315 06:59:32.778001   45374 cni.go:84] Creating CNI manager for ""
	I0315 06:59:32.778023   45374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 06:59:32.778034   45374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 06:59:32.778050   45374 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-764289 NodeName:test-preload-764289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 06:59:32.778187   45374 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-764289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 06:59:32.778251   45374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0315 06:59:32.788248   45374 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 06:59:32.788301   45374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 06:59:32.797857   45374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0315 06:59:32.816302   45374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 06:59:32.834018   45374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0315 06:59:32.852942   45374 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0315 06:59:32.857138   45374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 06:59:32.869616   45374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:59:33.012815   45374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:59:33.031455   45374 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289 for IP: 192.168.39.186
	I0315 06:59:33.031473   45374 certs.go:194] generating shared ca certs ...
	I0315 06:59:33.031486   45374 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:59:33.031671   45374 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 06:59:33.031723   45374 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 06:59:33.031737   45374 certs.go:256] generating profile certs ...
	I0315 06:59:33.031997   45374 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/client.key
	I0315 06:59:33.032071   45374 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/apiserver.key.b4ab241d
	I0315 06:59:33.032111   45374 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/proxy-client.key
	I0315 06:59:33.032249   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 06:59:33.032288   45374 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 06:59:33.032302   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 06:59:33.032350   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 06:59:33.032385   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 06:59:33.032408   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 06:59:33.032446   45374 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 06:59:33.033259   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 06:59:33.065176   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 06:59:33.105892   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 06:59:33.143580   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 06:59:33.187242   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 06:59:33.224981   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 06:59:33.251420   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 06:59:33.277939   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 06:59:33.304138   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 06:59:33.330454   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 06:59:33.356484   45374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 06:59:33.384009   45374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 06:59:33.403914   45374 ssh_runner.go:195] Run: openssl version
	I0315 06:59:33.409943   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 06:59:33.423349   45374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 06:59:33.428334   45374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 06:59:33.428387   45374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 06:59:33.434627   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 06:59:33.448436   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 06:59:33.462123   45374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:59:33.467464   45374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:59:33.467516   45374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 06:59:33.474004   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 06:59:33.487416   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 06:59:33.501035   45374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 06:59:33.506225   45374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 06:59:33.506313   45374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 06:59:33.512555   45374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 06:59:33.525787   45374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 06:59:33.530760   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 06:59:33.537198   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 06:59:33.543537   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 06:59:33.550553   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 06:59:33.557146   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 06:59:33.563747   45374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
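	The openssl -checkend 86400 calls above ask whether each certificate expires within the next 24 hours. An equivalent check in Go's crypto/x509, shown as a sketch against one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin is the crypto/x509 equivalent of
	// `openssl x509 -noout -in cert.crt -checkend 86400`: it reports whether the
	// certificate's NotAfter falls inside the given window from now.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}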
	I0315 06:59:33.570280   45374 kubeadm.go:391] StartCluster: {Name:test-preload-764289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-764289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:59:33.570364   45374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 06:59:33.570417   45374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:59:33.624585   45374 cri.go:89] found id: ""
	I0315 06:59:33.624655   45374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 06:59:33.638866   45374 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 06:59:33.638883   45374 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 06:59:33.638887   45374 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 06:59:33.638940   45374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 06:59:33.649623   45374 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:59:33.649996   45374 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-764289" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:59:33.650085   45374 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-764289" cluster setting kubeconfig missing "test-preload-764289" context setting]
	I0315 06:59:33.650364   45374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:59:33.650950   45374 kapi.go:59] client config for test-preload-764289: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 06:59:33.651497   45374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 06:59:33.661629   45374 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.186
	I0315 06:59:33.661661   45374 kubeadm.go:1154] stopping kube-system containers ...
	I0315 06:59:33.661671   45374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 06:59:33.661723   45374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 06:59:33.706650   45374 cri.go:89] found id: ""
	I0315 06:59:33.706720   45374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 06:59:33.725402   45374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 06:59:33.735861   45374 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 06:59:33.735884   45374 kubeadm.go:156] found existing configuration files:
	
	I0315 06:59:33.735924   45374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 06:59:33.745593   45374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 06:59:33.745663   45374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 06:59:33.755765   45374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 06:59:33.765416   45374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 06:59:33.765477   45374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 06:59:33.775298   45374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 06:59:33.785376   45374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 06:59:33.785450   45374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 06:59:33.796024   45374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 06:59:33.805785   45374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 06:59:33.805835   45374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
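
[editor's note] The four grep/rm pairs above are a stale-kubeconfig sweep: any of the standard kubeconfig files that does not reference the expected control-plane endpoint is removed before kubeadm regenerates it. A minimal sketch of that pattern, assuming plain local exec instead of minikube's ssh_runner (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// removeStaleKubeconfigs deletes any standard kubeconfig file that does not
// mention the expected control-plane endpoint, mirroring the grep/rm sequence
// in the log above. grep exits non-zero when the file is missing or the
// endpoint is absent, which is treated as "stale, remove it".
func removeStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
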
	I0315 06:59:33.816017   45374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 06:59:33.826197   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 06:59:33.921080   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 06:59:34.420011   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 06:59:34.697312   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 06:59:34.761130   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
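
[editor's note] Instead of a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing config, as the five commands above show. A rough sketch of that sequence, assuming local execution and the same paths seen in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order matches the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{
			"env", "PATH=/var/lib/minikube/binaries/v1.24.4:" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
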
	I0315 06:59:34.815680   45374 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:59:34.815778   45374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:59:35.315803   45374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:59:35.815834   45374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:59:35.867385   45374 api_server.go:72] duration metric: took 1.051708019s to wait for apiserver process to appear ...
	I0315 06:59:35.867410   45374 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:59:35.867439   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:35.867880   45374 api_server.go:269] stopped: https://192.168.39.186:8443/healthz: Get "https://192.168.39.186:8443/healthz": dial tcp 192.168.39.186:8443: connect: connection refused
	I0315 06:59:36.367746   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:39.530064   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 06:59:39.530099   45374 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 06:59:39.530118   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:39.574174   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 06:59:39.574205   45374 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 06:59:39.868261   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:39.873769   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 06:59:39.873799   45374 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 06:59:40.368446   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:40.374034   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 06:59:40.374057   45374 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 06:59:40.867597   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:40.880591   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 06:59:40.880622   45374 api_server.go:103] status: https://192.168.39.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 06:59:41.368267   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:41.376562   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0315 06:59:41.383567   45374 api_server.go:141] control plane version: v1.24.4
	I0315 06:59:41.383595   45374 api_server.go:131] duration metric: took 5.516178574s to wait for apiserver health ...
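
[editor's note] The healthz loop above polls https://<ip>:8443/healthz roughly every 500ms and treats anything other than HTTP 200 (the 403 for `system:anonymous`, the 500 while post-start hooks finish) as "not ready yet". A minimal sketch of that pattern, assuming the cluster CA path shown in the client config earlier in this log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the deadline passes. Non-200 answers (403, 500) mean keep waiting.
func waitForHealthz(url, caFile string, timeout time.Duration) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	err := waitForHealthz("https://192.168.39.186:8443/healthz",
		"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", 2*time.Minute)
	fmt.Println(err)
}
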
	I0315 06:59:41.383603   45374 cni.go:84] Creating CNI manager for ""
	I0315 06:59:41.383609   45374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 06:59:41.385768   45374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 06:59:41.387484   45374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 06:59:41.405801   45374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
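
[editor's note] The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration; its exact contents are not reproduced in this log. The sketch below builds a typical bridge+portmap conflist for illustration; the subnet and plugin fields are assumptions, not the file minikube actually wrote:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI conflist; the real 1-k8s.conflist may differ.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	// In the log, the equivalent bytes are scp'd to /etc/cni/net.d/1-k8s.conflist.
	fmt.Println(string(out))
}
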
	I0315 06:59:41.428961   45374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:59:41.441105   45374 system_pods.go:59] 7 kube-system pods found
	I0315 06:59:41.441136   45374 system_pods.go:61] "coredns-6d4b75cb6d-vp6n7" [6b5386a0-d8aa-47d9-b61b-c691f0fcccbc] Running
	I0315 06:59:41.441145   45374 system_pods.go:61] "etcd-test-preload-764289" [d44df19c-3cb8-4f18-98b8-9b0ade315331] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 06:59:41.441150   45374 system_pods.go:61] "kube-apiserver-test-preload-764289" [d574b132-c89e-4246-b949-452249822e8a] Running
	I0315 06:59:41.441158   45374 system_pods.go:61] "kube-controller-manager-test-preload-764289" [475c0b02-3fb8-4e0a-97fe-4fa668b31201] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 06:59:41.441163   45374 system_pods.go:61] "kube-proxy-kgr42" [f7d6bac1-8827-4f81-a2bd-df51ca112359] Running
	I0315 06:59:41.441167   45374 system_pods.go:61] "kube-scheduler-test-preload-764289" [3777c82c-7c2a-4e7d-b3b5-3c3cc9983416] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 06:59:41.441183   45374 system_pods.go:61] "storage-provisioner" [0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 06:59:41.441190   45374 system_pods.go:74] duration metric: took 12.206893ms to wait for pod list to return data ...
	I0315 06:59:41.441196   45374 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:59:41.447153   45374 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:59:41.447189   45374 node_conditions.go:123] node cpu capacity is 2
	I0315 06:59:41.447201   45374 node_conditions.go:105] duration metric: took 5.999296ms to run NodePressure ...
	I0315 06:59:41.447221   45374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 06:59:41.678518   45374 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 06:59:41.685068   45374 kubeadm.go:733] kubelet initialised
	I0315 06:59:41.685095   45374 kubeadm.go:734] duration metric: took 6.546285ms waiting for restarted kubelet to initialise ...
	I0315 06:59:41.685105   45374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:59:41.695944   45374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:41.704138   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.704168   45374 pod_ready.go:81] duration metric: took 8.197838ms for pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:41.704179   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.704189   45374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:41.709224   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "etcd-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.709252   45374 pod_ready.go:81] duration metric: took 5.052412ms for pod "etcd-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:41.709326   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "etcd-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.709345   45374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:41.716959   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "kube-apiserver-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.716999   45374 pod_ready.go:81] duration metric: took 7.638152ms for pod "kube-apiserver-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:41.717013   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "kube-apiserver-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.717022   45374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:41.833818   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.833857   45374 pod_ready.go:81] duration metric: took 116.824206ms for pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:41.833869   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:41.833884   45374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kgr42" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:42.233490   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "kube-proxy-kgr42" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:42.233543   45374 pod_ready.go:81] duration metric: took 399.647862ms for pod "kube-proxy-kgr42" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:42.233556   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "kube-proxy-kgr42" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:42.233563   45374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:42.632259   45374 pod_ready.go:97] node "test-preload-764289" hosting pod "kube-scheduler-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:42.632285   45374 pod_ready.go:81] duration metric: took 398.71419ms for pod "kube-scheduler-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	E0315 06:59:42.632297   45374 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-764289" hosting pod "kube-scheduler-test-preload-764289" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:42.632306   45374 pod_ready.go:38] duration metric: took 947.191501ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:59:42.632331   45374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 06:59:42.645326   45374 ops.go:34] apiserver oom_adj: -16
	I0315 06:59:42.645351   45374 kubeadm.go:591] duration metric: took 9.006457834s to restartPrimaryControlPlane
	I0315 06:59:42.645362   45374 kubeadm.go:393] duration metric: took 9.075087452s to StartCluster
	I0315 06:59:42.645394   45374 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:59:42.645480   45374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:59:42.646290   45374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 06:59:42.646546   45374 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 06:59:42.648355   45374 out.go:177] * Verifying Kubernetes components...
	I0315 06:59:42.646623   45374 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 06:59:42.646794   45374 config.go:182] Loaded profile config "test-preload-764289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0315 06:59:42.649628   45374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 06:59:42.649635   45374 addons.go:69] Setting storage-provisioner=true in profile "test-preload-764289"
	I0315 06:59:42.649673   45374 addons.go:234] Setting addon storage-provisioner=true in "test-preload-764289"
	W0315 06:59:42.649683   45374 addons.go:243] addon storage-provisioner should already be in state true
	I0315 06:59:42.649710   45374 host.go:66] Checking if "test-preload-764289" exists ...
	I0315 06:59:42.649635   45374 addons.go:69] Setting default-storageclass=true in profile "test-preload-764289"
	I0315 06:59:42.649763   45374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-764289"
	I0315 06:59:42.650051   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:59:42.650086   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:59:42.650109   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:59:42.650149   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:59:42.665096   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0315 06:59:42.665510   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0315 06:59:42.665689   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:59:42.665865   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:59:42.666363   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:59:42.666389   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:59:42.666496   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:59:42.666519   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:59:42.666760   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:59:42.666816   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:59:42.666970   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetState
	I0315 06:59:42.667290   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:59:42.667328   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:59:42.669502   45374 kapi.go:59] client config for test-preload-764289: &rest.Config{Host:"https://192.168.39.186:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/client.crt", KeyFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/profiles/test-preload-764289/client.key", CAFile:"/home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 06:59:42.669857   45374 addons.go:234] Setting addon default-storageclass=true in "test-preload-764289"
	W0315 06:59:42.669880   45374 addons.go:243] addon default-storageclass should already be in state true
	I0315 06:59:42.669907   45374 host.go:66] Checking if "test-preload-764289" exists ...
	I0315 06:59:42.670282   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:59:42.670324   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:59:42.682779   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0315 06:59:42.683212   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:59:42.683769   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:59:42.683795   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:59:42.684145   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:59:42.684391   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetState
	I0315 06:59:42.685473   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0315 06:59:42.685932   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:59:42.686363   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:42.686424   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:59:42.686441   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:59:42.688248   45374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 06:59:42.686807   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:59:42.689486   45374 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:59:42.689501   45374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 06:59:42.689515   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:42.689852   45374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:59:42.689891   45374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:59:42.692416   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:42.692883   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:42.692907   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:42.693086   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:42.693271   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:42.693448   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:42.693605   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:42.705138   45374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I0315 06:59:42.705497   45374 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:59:42.705975   45374 main.go:141] libmachine: Using API Version  1
	I0315 06:59:42.705996   45374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:59:42.706266   45374 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:59:42.706451   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetState
	I0315 06:59:42.708315   45374 main.go:141] libmachine: (test-preload-764289) Calling .DriverName
	I0315 06:59:42.708599   45374 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 06:59:42.708613   45374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 06:59:42.708628   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHHostname
	I0315 06:59:42.711466   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:42.711949   45374 main.go:141] libmachine: (test-preload-764289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:51:0e", ip: ""} in network mk-test-preload-764289: {Iface:virbr1 ExpiryTime:2024-03-15 07:56:08 +0000 UTC Type:0 Mac:52:54:00:ad:51:0e Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:test-preload-764289 Clientid:01:52:54:00:ad:51:0e}
	I0315 06:59:42.711982   45374 main.go:141] libmachine: (test-preload-764289) DBG | domain test-preload-764289 has defined IP address 192.168.39.186 and MAC address 52:54:00:ad:51:0e in network mk-test-preload-764289
	I0315 06:59:42.712101   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHPort
	I0315 06:59:42.712259   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHKeyPath
	I0315 06:59:42.712397   45374 main.go:141] libmachine: (test-preload-764289) Calling .GetSSHUsername
	I0315 06:59:42.712563   45374 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/test-preload-764289/id_rsa Username:docker}
	I0315 06:59:42.855515   45374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 06:59:42.875570   45374 node_ready.go:35] waiting up to 6m0s for node "test-preload-764289" to be "Ready" ...
	I0315 06:59:42.944016   45374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 06:59:43.042009   45374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 06:59:43.881983   45374 main.go:141] libmachine: Making call to close driver server
	I0315 06:59:43.882003   45374 main.go:141] libmachine: (test-preload-764289) Calling .Close
	I0315 06:59:43.882348   45374 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:59:43.882364   45374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:59:43.882384   45374 main.go:141] libmachine: (test-preload-764289) DBG | Closing plugin on server side
	I0315 06:59:43.882450   45374 main.go:141] libmachine: Making call to close driver server
	I0315 06:59:43.882461   45374 main.go:141] libmachine: (test-preload-764289) Calling .Close
	I0315 06:59:43.882700   45374 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:59:43.882716   45374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:59:43.882738   45374 main.go:141] libmachine: (test-preload-764289) DBG | Closing plugin on server side
	I0315 06:59:43.890015   45374 main.go:141] libmachine: Making call to close driver server
	I0315 06:59:43.890045   45374 main.go:141] libmachine: (test-preload-764289) Calling .Close
	I0315 06:59:43.890369   45374 main.go:141] libmachine: (test-preload-764289) DBG | Closing plugin on server side
	I0315 06:59:43.890392   45374 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:59:43.890402   45374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:59:43.918044   45374 main.go:141] libmachine: Making call to close driver server
	I0315 06:59:43.918067   45374 main.go:141] libmachine: (test-preload-764289) Calling .Close
	I0315 06:59:43.918365   45374 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:59:43.918382   45374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:59:43.918390   45374 main.go:141] libmachine: Making call to close driver server
	I0315 06:59:43.918397   45374 main.go:141] libmachine: (test-preload-764289) Calling .Close
	I0315 06:59:43.918627   45374 main.go:141] libmachine: Successfully made call to close driver server
	I0315 06:59:43.918633   45374 main.go:141] libmachine: (test-preload-764289) DBG | Closing plugin on server side
	I0315 06:59:43.918645   45374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 06:59:43.921124   45374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0315 06:59:43.922272   45374 addons.go:505] duration metric: took 1.275656565s for enable addons: enabled=[default-storageclass storage-provisioner]
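
[editor's note] Enabling each addon reduces to copying its manifest under /etc/kubernetes/addons and applying it with the in-VM kubectl against the node-local kubeconfig, as the two apply commands above show. A small sketch of that step, assuming local exec and the same paths as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies one addon manifest with the versioned kubectl binary and
// the node-local kubeconfig, echoing the apply commands in the log above.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
		}
	}
}
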
	I0315 06:59:44.881702   45374 node_ready.go:53] node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:47.379605   45374 node_ready.go:53] node "test-preload-764289" has status "Ready":"False"
	I0315 06:59:49.883887   45374 node_ready.go:49] node "test-preload-764289" has status "Ready":"True"
	I0315 06:59:49.883919   45374 node_ready.go:38] duration metric: took 7.008318539s for node "test-preload-764289" to be "Ready" ...
	I0315 06:59:49.883930   45374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 06:59:49.890916   45374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:49.897540   45374 pod_ready.go:92] pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:49.897566   45374 pod_ready.go:81] duration metric: took 6.620186ms for pod "coredns-6d4b75cb6d-vp6n7" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:49.897574   45374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:51.403890   45374 pod_ready.go:92] pod "etcd-test-preload-764289" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:51.403920   45374 pod_ready.go:81] duration metric: took 1.506337987s for pod "etcd-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:51.403932   45374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:51.408995   45374 pod_ready.go:92] pod "kube-apiserver-test-preload-764289" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:51.409017   45374 pod_ready.go:81] duration metric: took 5.078396ms for pod "kube-apiserver-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:51.409026   45374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.415871   45374 pod_ready.go:92] pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:53.415895   45374 pod_ready.go:81] duration metric: took 2.006862969s for pod "kube-controller-manager-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.415909   45374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kgr42" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.422953   45374 pod_ready.go:92] pod "kube-proxy-kgr42" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:53.422976   45374 pod_ready.go:81] duration metric: took 7.059014ms for pod "kube-proxy-kgr42" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.422988   45374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.479655   45374 pod_ready.go:92] pod "kube-scheduler-test-preload-764289" in "kube-system" namespace has status "Ready":"True"
	I0315 06:59:53.479680   45374 pod_ready.go:81] duration metric: took 56.685493ms for pod "kube-scheduler-test-preload-764289" in "kube-system" namespace to be "Ready" ...
	I0315 06:59:53.479697   45374 pod_ready.go:38] duration metric: took 3.59574819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
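
[editor's note] The pod_ready loop above waits, per control-plane pod, for the pod's Ready condition to become True once the node itself reports Ready. A minimal client-go sketch of that check (not minikube's own pod_ready implementation); the kubeconfig path and pod names are taken from the log purely for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one kube-system pod until its Ready condition is True,
// the same per-pod check the pod_ready loop above performs.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s never became Ready", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, p := range []string{"coredns-6d4b75cb6d-vp6n7", "etcd-test-preload-764289"} {
		fmt.Println(p, waitPodReady(cs, p, 6*time.Minute))
	}
}
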
	I0315 06:59:53.479713   45374 api_server.go:52] waiting for apiserver process to appear ...
	I0315 06:59:53.479766   45374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:59:53.494624   45374 api_server.go:72] duration metric: took 10.848043869s to wait for apiserver process to appear ...
	I0315 06:59:53.494653   45374 api_server.go:88] waiting for apiserver healthz status ...
	I0315 06:59:53.494675   45374 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0315 06:59:53.500959   45374 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0315 06:59:53.502243   45374 api_server.go:141] control plane version: v1.24.4
	I0315 06:59:53.502265   45374 api_server.go:131] duration metric: took 7.605449ms to wait for apiserver health ...
	I0315 06:59:53.502273   45374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 06:59:53.682318   45374 system_pods.go:59] 7 kube-system pods found
	I0315 06:59:53.682344   45374 system_pods.go:61] "coredns-6d4b75cb6d-vp6n7" [6b5386a0-d8aa-47d9-b61b-c691f0fcccbc] Running
	I0315 06:59:53.682348   45374 system_pods.go:61] "etcd-test-preload-764289" [d44df19c-3cb8-4f18-98b8-9b0ade315331] Running
	I0315 06:59:53.682352   45374 system_pods.go:61] "kube-apiserver-test-preload-764289" [d574b132-c89e-4246-b949-452249822e8a] Running
	I0315 06:59:53.682356   45374 system_pods.go:61] "kube-controller-manager-test-preload-764289" [475c0b02-3fb8-4e0a-97fe-4fa668b31201] Running
	I0315 06:59:53.682364   45374 system_pods.go:61] "kube-proxy-kgr42" [f7d6bac1-8827-4f81-a2bd-df51ca112359] Running
	I0315 06:59:53.682367   45374 system_pods.go:61] "kube-scheduler-test-preload-764289" [3777c82c-7c2a-4e7d-b3b5-3c3cc9983416] Running
	I0315 06:59:53.682370   45374 system_pods.go:61] "storage-provisioner" [0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053] Running
	I0315 06:59:53.682376   45374 system_pods.go:74] duration metric: took 180.097702ms to wait for pod list to return data ...
	I0315 06:59:53.682383   45374 default_sa.go:34] waiting for default service account to be created ...
	I0315 06:59:53.879675   45374 default_sa.go:45] found service account: "default"
	I0315 06:59:53.879714   45374 default_sa.go:55] duration metric: took 197.323496ms for default service account to be created ...
	I0315 06:59:53.879723   45374 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 06:59:54.082245   45374 system_pods.go:86] 7 kube-system pods found
	I0315 06:59:54.082272   45374 system_pods.go:89] "coredns-6d4b75cb6d-vp6n7" [6b5386a0-d8aa-47d9-b61b-c691f0fcccbc] Running
	I0315 06:59:54.082278   45374 system_pods.go:89] "etcd-test-preload-764289" [d44df19c-3cb8-4f18-98b8-9b0ade315331] Running
	I0315 06:59:54.082282   45374 system_pods.go:89] "kube-apiserver-test-preload-764289" [d574b132-c89e-4246-b949-452249822e8a] Running
	I0315 06:59:54.082286   45374 system_pods.go:89] "kube-controller-manager-test-preload-764289" [475c0b02-3fb8-4e0a-97fe-4fa668b31201] Running
	I0315 06:59:54.082294   45374 system_pods.go:89] "kube-proxy-kgr42" [f7d6bac1-8827-4f81-a2bd-df51ca112359] Running
	I0315 06:59:54.082301   45374 system_pods.go:89] "kube-scheduler-test-preload-764289" [3777c82c-7c2a-4e7d-b3b5-3c3cc9983416] Running
	I0315 06:59:54.082305   45374 system_pods.go:89] "storage-provisioner" [0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053] Running
	I0315 06:59:54.082311   45374 system_pods.go:126] duration metric: took 202.583134ms to wait for k8s-apps to be running ...
	I0315 06:59:54.082318   45374 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 06:59:54.082359   45374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:59:54.098883   45374 system_svc.go:56] duration metric: took 16.558598ms WaitForService to wait for kubelet
	I0315 06:59:54.098914   45374 kubeadm.go:576] duration metric: took 11.45234068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 06:59:54.098931   45374 node_conditions.go:102] verifying NodePressure condition ...
	I0315 06:59:54.279708   45374 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 06:59:54.279733   45374 node_conditions.go:123] node cpu capacity is 2
	I0315 06:59:54.279742   45374 node_conditions.go:105] duration metric: took 180.807752ms to run NodePressure ...
	I0315 06:59:54.279753   45374 start.go:240] waiting for startup goroutines ...
	I0315 06:59:54.279760   45374 start.go:245] waiting for cluster config update ...
	I0315 06:59:54.279768   45374 start.go:254] writing updated cluster config ...
	I0315 06:59:54.280003   45374 ssh_runner.go:195] Run: rm -f paused
	I0315 06:59:54.327920   45374 start.go:600] kubectl: 1.29.2, cluster: 1.24.4 (minor skew: 5)
	I0315 06:59:54.330063   45374 out.go:177] 
	W0315 06:59:54.331582   45374 out.go:239] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0315 06:59:54.333093   45374 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0315 06:59:54.334453   45374 out.go:177] * Done! kubectl is now configured to use "test-preload-764289" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.259419280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485995259392000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec83254a-6404-4ead-a907-dd3cc2f7c072 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.259937096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6f1a4a2-f429-480d-b712-d178d33d98a2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.260016296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6f1a4a2-f429-480d-b712-d178d33d98a2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.260915931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c58e7cde872f44aa35494d8b3046d9f3eb57a5e07932a0e3fa3a09f6ba4d877b,PodSandboxId:546b134832b8ff14639e1201ace20e447b4b865d6cf4d3535ab8d61e28552c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710485987890238539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vp6n7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5386a0-d8aa-47d9-b61b-c691f0fcccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d45fa43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb13f59cb7a18064c4107c1a1a3d27db73204b11c95541c02307dd4a2e2b0be,PodSandboxId:463367b412720657bcb83120463786b01bac009665eb23a4f29b20dfa8ec0f4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485980891727598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053,},Annotations:map[string]string{io.kubernetes.container.hash: 3c93fd02,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d018d0e3549b92f69f6c16285fef5ad40b0301a4917d4564ab027783f2d12af,PodSandboxId:bdad0839eacb0202c51426fa255c1d61b950ad56886f24a4a9d06227237ccf00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710485980596895476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgr42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7
d6bac1-8827-4f81-a2bd-df51ca112359,},Annotations:map[string]string{io.kubernetes.container.hash: 6d0f186f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e122a9acddb1192a4e2b1fe5d2f667dac488cd531afe1dbab757b3425049901,PodSandboxId:4855fdef6cf28d0aaee19e46a56da26d7fb32095490bf11b614d5754c4a6fecb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710485975636692092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b49dbbd
3e45bcdb35ade95885ce199,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a4a1b70a34d046ce9a886f15a5af51fe494a5579f2b971d31cb8cf0b1ba4d0,PodSandboxId:714f36871e61596d705802b0f62f5bd3d0d5a27b68669bec4b9c40617e455b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710485975576715700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c1ecee6a2e4d10fef4c33519353b2b56,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3030153d5b1fe071685f1d8bd9c5b580b403e3fbd3b83075aa18a9335c8cdf93,PodSandboxId:c83c7c7dbdfe287ea4d03282b8d13c9446e8eb741f36478c9bbf1010b766ba5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710485975568453303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f5aed14e66b8a09346f0f00cf663f,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5fed753d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b9c0a06e67bced38aaceefdbb2be8a897d80c29f46236b9c4b3778009727d5,PodSandboxId:619f49d281acdf14c93eb01715b4af5889aa83513a03cf0c5b5687182da7e06a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710485975536853365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c364e6d93eb9ad132a0564c66f4f73a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6f1a4a2-f429-480d-b712-d178d33d98a2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.303804229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95a587c5-3a25-49c8-9b30-038fe6a36b79 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.303876115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95a587c5-3a25-49c8-9b30-038fe6a36b79 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.306005253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=338194c9-618a-46b0-b5eb-bb32f4b4db84 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.306535636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485995306509518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=338194c9-618a-46b0-b5eb-bb32f4b4db84 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.307136354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb6f1fbb-e6f4-4644-b2cb-61c17c7e35a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.307208421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb6f1fbb-e6f4-4644-b2cb-61c17c7e35a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.307375192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c58e7cde872f44aa35494d8b3046d9f3eb57a5e07932a0e3fa3a09f6ba4d877b,PodSandboxId:546b134832b8ff14639e1201ace20e447b4b865d6cf4d3535ab8d61e28552c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710485987890238539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vp6n7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5386a0-d8aa-47d9-b61b-c691f0fcccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d45fa43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb13f59cb7a18064c4107c1a1a3d27db73204b11c95541c02307dd4a2e2b0be,PodSandboxId:463367b412720657bcb83120463786b01bac009665eb23a4f29b20dfa8ec0f4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485980891727598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053,},Annotations:map[string]string{io.kubernetes.container.hash: 3c93fd02,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d018d0e3549b92f69f6c16285fef5ad40b0301a4917d4564ab027783f2d12af,PodSandboxId:bdad0839eacb0202c51426fa255c1d61b950ad56886f24a4a9d06227237ccf00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710485980596895476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgr42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7
d6bac1-8827-4f81-a2bd-df51ca112359,},Annotations:map[string]string{io.kubernetes.container.hash: 6d0f186f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e122a9acddb1192a4e2b1fe5d2f667dac488cd531afe1dbab757b3425049901,PodSandboxId:4855fdef6cf28d0aaee19e46a56da26d7fb32095490bf11b614d5754c4a6fecb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710485975636692092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b49dbbd
3e45bcdb35ade95885ce199,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a4a1b70a34d046ce9a886f15a5af51fe494a5579f2b971d31cb8cf0b1ba4d0,PodSandboxId:714f36871e61596d705802b0f62f5bd3d0d5a27b68669bec4b9c40617e455b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710485975576715700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c1ecee6a2e4d10fef4c33519353b2b56,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3030153d5b1fe071685f1d8bd9c5b580b403e3fbd3b83075aa18a9335c8cdf93,PodSandboxId:c83c7c7dbdfe287ea4d03282b8d13c9446e8eb741f36478c9bbf1010b766ba5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710485975568453303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f5aed14e66b8a09346f0f00cf663f,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5fed753d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b9c0a06e67bced38aaceefdbb2be8a897d80c29f46236b9c4b3778009727d5,PodSandboxId:619f49d281acdf14c93eb01715b4af5889aa83513a03cf0c5b5687182da7e06a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710485975536853365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c364e6d93eb9ad132a0564c66f4f73a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb6f1fbb-e6f4-4644-b2cb-61c17c7e35a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.347268919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=792deea7-af56-4d5d-837d-1d99031ed642 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.347362551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=792deea7-af56-4d5d-837d-1d99031ed642 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.349226572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71b4d2fb-4f65-499c-990e-082a8d9dca18 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.349662768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485995349638595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71b4d2fb-4f65-499c-990e-082a8d9dca18 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.350350385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15c9dff9-d49e-4c85-a9ee-4d84d9e9afef name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.350401380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15c9dff9-d49e-4c85-a9ee-4d84d9e9afef name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.350559544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c58e7cde872f44aa35494d8b3046d9f3eb57a5e07932a0e3fa3a09f6ba4d877b,PodSandboxId:546b134832b8ff14639e1201ace20e447b4b865d6cf4d3535ab8d61e28552c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710485987890238539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vp6n7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5386a0-d8aa-47d9-b61b-c691f0fcccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d45fa43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb13f59cb7a18064c4107c1a1a3d27db73204b11c95541c02307dd4a2e2b0be,PodSandboxId:463367b412720657bcb83120463786b01bac009665eb23a4f29b20dfa8ec0f4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485980891727598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053,},Annotations:map[string]string{io.kubernetes.container.hash: 3c93fd02,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d018d0e3549b92f69f6c16285fef5ad40b0301a4917d4564ab027783f2d12af,PodSandboxId:bdad0839eacb0202c51426fa255c1d61b950ad56886f24a4a9d06227237ccf00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710485980596895476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgr42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7
d6bac1-8827-4f81-a2bd-df51ca112359,},Annotations:map[string]string{io.kubernetes.container.hash: 6d0f186f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e122a9acddb1192a4e2b1fe5d2f667dac488cd531afe1dbab757b3425049901,PodSandboxId:4855fdef6cf28d0aaee19e46a56da26d7fb32095490bf11b614d5754c4a6fecb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710485975636692092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b49dbbd
3e45bcdb35ade95885ce199,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a4a1b70a34d046ce9a886f15a5af51fe494a5579f2b971d31cb8cf0b1ba4d0,PodSandboxId:714f36871e61596d705802b0f62f5bd3d0d5a27b68669bec4b9c40617e455b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710485975576715700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c1ecee6a2e4d10fef4c33519353b2b56,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3030153d5b1fe071685f1d8bd9c5b580b403e3fbd3b83075aa18a9335c8cdf93,PodSandboxId:c83c7c7dbdfe287ea4d03282b8d13c9446e8eb741f36478c9bbf1010b766ba5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710485975568453303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f5aed14e66b8a09346f0f00cf663f,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5fed753d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b9c0a06e67bced38aaceefdbb2be8a897d80c29f46236b9c4b3778009727d5,PodSandboxId:619f49d281acdf14c93eb01715b4af5889aa83513a03cf0c5b5687182da7e06a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710485975536853365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c364e6d93eb9ad132a0564c66f4f73a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15c9dff9-d49e-4c85-a9ee-4d84d9e9afef name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.385380418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f05ff625-480d-4e38-a231-aa9190bdba87 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.385470373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f05ff625-480d-4e38-a231-aa9190bdba87 name=/runtime.v1.RuntimeService/Version
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.386459843Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ba68089-a29d-4069-9312-1970ec51b2ef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.386958145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710485995386932404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ba68089-a29d-4069-9312-1970ec51b2ef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.387669468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bcba249-513f-4d0f-b719-4e9cfe1b7b15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.387748454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bcba249-513f-4d0f-b719-4e9cfe1b7b15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 06:59:55 test-preload-764289 crio[671]: time="2024-03-15 06:59:55.387964047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c58e7cde872f44aa35494d8b3046d9f3eb57a5e07932a0e3fa3a09f6ba4d877b,PodSandboxId:546b134832b8ff14639e1201ace20e447b4b865d6cf4d3535ab8d61e28552c38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710485987890238539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vp6n7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5386a0-d8aa-47d9-b61b-c691f0fcccbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d45fa43,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb13f59cb7a18064c4107c1a1a3d27db73204b11c95541c02307dd4a2e2b0be,PodSandboxId:463367b412720657bcb83120463786b01bac009665eb23a4f29b20dfa8ec0f4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710485980891727598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053,},Annotations:map[string]string{io.kubernetes.container.hash: 3c93fd02,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d018d0e3549b92f69f6c16285fef5ad40b0301a4917d4564ab027783f2d12af,PodSandboxId:bdad0839eacb0202c51426fa255c1d61b950ad56886f24a4a9d06227237ccf00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710485980596895476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgr42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7
d6bac1-8827-4f81-a2bd-df51ca112359,},Annotations:map[string]string{io.kubernetes.container.hash: 6d0f186f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e122a9acddb1192a4e2b1fe5d2f667dac488cd531afe1dbab757b3425049901,PodSandboxId:4855fdef6cf28d0aaee19e46a56da26d7fb32095490bf11b614d5754c4a6fecb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710485975636692092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b49dbbd
3e45bcdb35ade95885ce199,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a4a1b70a34d046ce9a886f15a5af51fe494a5579f2b971d31cb8cf0b1ba4d0,PodSandboxId:714f36871e61596d705802b0f62f5bd3d0d5a27b68669bec4b9c40617e455b24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710485975576715700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c1ecee6a2e4d10fef4c33519353b2b56,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3030153d5b1fe071685f1d8bd9c5b580b403e3fbd3b83075aa18a9335c8cdf93,PodSandboxId:c83c7c7dbdfe287ea4d03282b8d13c9446e8eb741f36478c9bbf1010b766ba5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710485975568453303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f5aed14e66b8a09346f0f00cf663f,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5fed753d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30b9c0a06e67bced38aaceefdbb2be8a897d80c29f46236b9c4b3778009727d5,PodSandboxId:619f49d281acdf14c93eb01715b4af5889aa83513a03cf0c5b5687182da7e06a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710485975536853365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-764289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c364e6d93eb9ad132a0564c66f4f73a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 97d1cd07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bcba249-513f-4d0f-b719-4e9cfe1b7b15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c58e7cde872f4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   546b134832b8f       coredns-6d4b75cb6d-vp6n7
	beb13f59cb7a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   463367b412720       storage-provisioner
	5d018d0e3549b       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   bdad0839eacb0       kube-proxy-kgr42
	9e122a9acddb1       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   4855fdef6cf28       kube-scheduler-test-preload-764289
	74a4a1b70a34d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   714f36871e615       kube-controller-manager-test-preload-764289
	3030153d5b1fe       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   c83c7c7dbdfe2       etcd-test-preload-764289
	30b9c0a06e67b       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   619f49d281acd       kube-apiserver-test-preload-764289
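The Version, ImageFsInfo, and ListContainers RPCs in the CRI-O debug log above are essentially the same data the container-status table is built from, and they can be replayed by hand on the node with crictl. A minimal sketch, assuming the minikube profile name from this test and the socket path shown in the log:

  minikube -p test-preload-764289 ssh
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a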
	
	
	==> coredns [c58e7cde872f44aa35494d8b3046d9f3eb57a5e07932a0e3fa3a09f6ba4d877b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:54616 - 50126 "HINFO IN 7308049051088132731.894848754470830476. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01037515s
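The random HINFO lookup logged above appears to be CoreDNS probing its forward path at startup (the loop-detection check); the NXDOMAIN answer just means no forwarding loop was found. For an ad-hoc end-to-end DNS check against this cluster, something like the following could be used; the pod name and busybox image are illustrative, not taken from the test run:

  kubectl --context test-preload-764289 run dns-probe --rm -it --restart=Never \
    --image=busybox:1.28 -- nslookup kubernetes.default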
	
	
	==> describe nodes <==
	Name:               test-preload-764289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-764289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=test-preload-764289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T06_58_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 06:58:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-764289
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 06:59:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 06:59:49 +0000   Fri, 15 Mar 2024 06:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 06:59:49 +0000   Fri, 15 Mar 2024 06:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 06:59:49 +0000   Fri, 15 Mar 2024 06:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 06:59:49 +0000   Fri, 15 Mar 2024 06:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    test-preload-764289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 018e8f748d194ee1af51e6eb2834790d
	  System UUID:                018e8f74-8d19-4ee1-af51-e6eb2834790d
	  Boot ID:                    ec870d51-3186-45f1-abea-7e07ca04a98e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vp6n7                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-764289                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-764289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-764289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-kgr42                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-764289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                kubelet          Node test-preload-764289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                kubelet          Node test-preload-764289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                kubelet          Node test-preload-764289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                kubelet          Node test-preload-764289 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node test-preload-764289 event: Registered Node test-preload-764289 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node test-preload-764289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node test-preload-764289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node test-preload-764289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-764289 event: Registered Node test-preload-764289 in Controller
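In the two resource tables above, the percentages are requests and limits relative to the node's allocatable capacity: 750m of 2 CPUs is about 37%, and 170Mi of roughly 2113Mi allocatable memory is about 8%. The whole section can be regenerated against the live cluster with kubectl; the context name is assumed to match the minikube profile:

  kubectl --context test-preload-764289 describe node test-preload-764289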
	
	
	==> dmesg <==
	[Mar15 06:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.572090] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.735219] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.629202] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.830092] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057404] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068907] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.173523] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.150776] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.228039] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +12.986382] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +0.064209] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.614759] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +5.903606] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.212365] systemd-fstab-generator[1682]: Ignoring "noauto" option for root device
	[  +5.020229] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [3030153d5b1fe071685f1d8bd9c5b580b403e3fbd3b83075aa18a9335c8cdf93] <==
	{"level":"info","ts":"2024-03-15T06:59:35.966Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1bfd5d64eb00b2d5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-15T06:59:35.987Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-15T06:59:35.987Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T06:59:35.990Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1bfd5d64eb00b2d5","initial-advertise-peer-urls":["https://192.168.39.186:2380"],"listen-peer-urls":["https://192.168.39.186:2380"],"advertise-client-urls":["https://192.168.39.186:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.186:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T06:59:35.990Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T06:59:35.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 switched to configuration voters=(2016870896152654549)"}
	{"level":"info","ts":"2024-03-15T06:59:35.990Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","added-peer-id":"1bfd5d64eb00b2d5","added-peer-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-03-15T06:59:35.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d06a36b1777ee5c","local-member-id":"1bfd5d64eb00b2d5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:59:35.992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T06:59:35.988Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-03-15T06:59:36.001Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.186:2380"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgPreVoteResp from 1bfd5d64eb00b2d5 at term 2"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 received MsgVoteResp from 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bfd5d64eb00b2d5 became leader at term 3"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bfd5d64eb00b2d5 elected leader 1bfd5d64eb00b2d5 at term 3"}
	{"level":"info","ts":"2024-03-15T06:59:37.015Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1bfd5d64eb00b2d5","local-member-attributes":"{Name:test-preload-764289 ClientURLs:[https://192.168.39.186:2379]}","request-path":"/0/members/1bfd5d64eb00b2d5/attributes","cluster-id":"7d06a36b1777ee5c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T06:59:37.016Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:59:37.017Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T06:59:37.017Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T06:59:37.018Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.186:2379"}
	{"level":"info","ts":"2024-03-15T06:59:37.019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T06:59:37.019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 06:59:55 up 0 min,  0 users,  load average: 0.79, 0.20, 0.07
	Linux test-preload-764289 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30b9c0a06e67bced38aaceefdbb2be8a897d80c29f46236b9c4b3778009727d5] <==
	I0315 06:59:39.513829       1 establishing_controller.go:76] Starting EstablishingController
	I0315 06:59:39.513896       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0315 06:59:39.513933       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0315 06:59:39.513945       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 06:59:39.514514       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0315 06:59:39.515127       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0315 06:59:39.598767       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0315 06:59:39.609781       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0315 06:59:39.615247       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0315 06:59:39.666123       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0315 06:59:39.668094       1 cache.go:39] Caches are synced for autoregister controller
	I0315 06:59:39.676693       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 06:59:39.677142       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0315 06:59:39.678882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 06:59:39.692161       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 06:59:40.126008       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0315 06:59:40.489421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 06:59:41.066356       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0315 06:59:41.540944       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0315 06:59:41.560852       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0315 06:59:41.610148       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0315 06:59:41.635371       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 06:59:41.657981       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 06:59:51.992848       1 controller.go:611] quota admission added evaluator for: endpoints
	I0315 06:59:52.027399       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
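The repeated "quota admission added evaluator" lines are the ResourceQuota admission plugin lazily registering evaluators as resource types are first used after the restart; they are informational, not failures. A quick readiness probe of this API server, assuming the kubectl context matches the profile name:

  kubectl --context test-preload-764289 get --raw '/readyz?verbose'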
	
	
	==> kube-controller-manager [74a4a1b70a34d046ce9a886f15a5af51fe494a5579f2b971d31cb8cf0b1ba4d0] <==
	I0315 06:59:51.997875       1 shared_informer.go:262] Caches are synced for TTL
	I0315 06:59:51.997949       1 shared_informer.go:262] Caches are synced for daemon sets
	I0315 06:59:51.999393       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0315 06:59:52.002660       1 shared_informer.go:262] Caches are synced for cronjob
	I0315 06:59:52.002716       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0315 06:59:52.004193       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0315 06:59:52.004823       1 shared_informer.go:262] Caches are synced for PV protection
	I0315 06:59:52.004905       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0315 06:59:52.005521       1 shared_informer.go:262] Caches are synced for expand
	I0315 06:59:52.007995       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0315 06:59:52.008488       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0315 06:59:52.009787       1 shared_informer.go:262] Caches are synced for crt configmap
	I0315 06:59:52.010297       1 shared_informer.go:262] Caches are synced for service account
	I0315 06:59:52.015499       1 shared_informer.go:262] Caches are synced for ephemeral
	I0315 06:59:52.017557       1 shared_informer.go:262] Caches are synced for disruption
	I0315 06:59:52.017589       1 disruption.go:371] Sending events to api server.
	I0315 06:59:52.017692       1 shared_informer.go:262] Caches are synced for job
	I0315 06:59:52.020860       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0315 06:59:52.020921       1 shared_informer.go:262] Caches are synced for PVC protection
	I0315 06:59:52.024652       1 shared_informer.go:262] Caches are synced for GC
	I0315 06:59:52.223968       1 shared_informer.go:262] Caches are synced for resource quota
	I0315 06:59:52.254653       1 shared_informer.go:262] Caches are synced for resource quota
	I0315 06:59:52.664986       1 shared_informer.go:262] Caches are synced for garbage collector
	I0315 06:59:52.665028       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0315 06:59:52.681877       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [5d018d0e3549b92f69f6c16285fef5ad40b0301a4917d4564ab027783f2d12af] <==
	I0315 06:59:41.002193       1 node.go:163] Successfully retrieved node IP: 192.168.39.186
	I0315 06:59:41.002379       1 server_others.go:138] "Detected node IP" address="192.168.39.186"
	I0315 06:59:41.002405       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0315 06:59:41.058650       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0315 06:59:41.058685       1 server_others.go:206] "Using iptables Proxier"
	I0315 06:59:41.059128       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0315 06:59:41.060316       1 server.go:661] "Version info" version="v1.24.4"
	I0315 06:59:41.060350       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:59:41.061568       1 config.go:317] "Starting service config controller"
	I0315 06:59:41.061825       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0315 06:59:41.061889       1 config.go:226] "Starting endpoint slice config controller"
	I0315 06:59:41.061905       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0315 06:59:41.063646       1 config.go:444] "Starting node config controller"
	I0315 06:59:41.068447       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0315 06:59:41.162082       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0315 06:59:41.162285       1 shared_informer.go:262] Caches are synced for service config
	I0315 06:59:41.173802       1 shared_informer.go:262] Caches are synced for node config
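kube-proxy fell back to the iptables proxier because no mode was configured ("Unknown proxy mode, assuming iptables proxy"). With that proxier, per-service rules hang off the KUBE-SERVICES chain in the nat table, so a spot check that rules are being programmed on the node could be:

  minikube -p test-preload-764289 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n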
	
	
	==> kube-scheduler [9e122a9acddb1192a4e2b1fe5d2f667dac488cd531afe1dbab757b3425049901] <==
	I0315 06:59:36.564436       1 serving.go:348] Generated self-signed cert in-memory
	W0315 06:59:39.527993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 06:59:39.528292       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 06:59:39.528406       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 06:59:39.528435       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 06:59:39.606175       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0315 06:59:39.606406       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 06:59:39.609608       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0315 06:59:39.610218       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 06:59:39.610432       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 06:59:39.610521       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 06:59:39.710959       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.826137    1063 topology_manager.go:200] "Topology Admit Handler"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.826246    1063 topology_manager.go:200] "Topology Admit Handler"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.826287    1063 topology_manager.go:200] "Topology Admit Handler"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: E0315 06:59:39.828019    1063 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vp6n7" podUID=6b5386a0-d8aa-47d9-b61b-c691f0fcccbc
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: E0315 06:59:39.868654    1063 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885493    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053-tmp\") pod \"storage-provisioner\" (UID: \"0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053\") " pod="kube-system/storage-provisioner"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885568    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklqq\" (UniqueName: \"kubernetes.io/projected/0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053-kube-api-access-xklqq\") pod \"storage-provisioner\" (UID: \"0c27d81e-2a1b-4d2c-bae9-fbf2bdefa053\") " pod="kube-system/storage-provisioner"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885588    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4t4\" (UniqueName: \"kubernetes.io/projected/f7d6bac1-8827-4f81-a2bd-df51ca112359-kube-api-access-ng4t4\") pod \"kube-proxy-kgr42\" (UID: \"f7d6bac1-8827-4f81-a2bd-df51ca112359\") " pod="kube-system/kube-proxy-kgr42"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885755    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume\") pod \"coredns-6d4b75cb6d-vp6n7\" (UID: \"6b5386a0-d8aa-47d9-b61b-c691f0fcccbc\") " pod="kube-system/coredns-6d4b75cb6d-vp6n7"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885858    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2zq\" (UniqueName: \"kubernetes.io/projected/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-kube-api-access-rz2zq\") pod \"coredns-6d4b75cb6d-vp6n7\" (UID: \"6b5386a0-d8aa-47d9-b61b-c691f0fcccbc\") " pod="kube-system/coredns-6d4b75cb6d-vp6n7"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885888    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7d6bac1-8827-4f81-a2bd-df51ca112359-kube-proxy\") pod \"kube-proxy-kgr42\" (UID: \"f7d6bac1-8827-4f81-a2bd-df51ca112359\") " pod="kube-system/kube-proxy-kgr42"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885920    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7d6bac1-8827-4f81-a2bd-df51ca112359-xtables-lock\") pod \"kube-proxy-kgr42\" (UID: \"f7d6bac1-8827-4f81-a2bd-df51ca112359\") " pod="kube-system/kube-proxy-kgr42"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885951    1063 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7d6bac1-8827-4f81-a2bd-df51ca112359-lib-modules\") pod \"kube-proxy-kgr42\" (UID: \"f7d6bac1-8827-4f81-a2bd-df51ca112359\") " pod="kube-system/kube-proxy-kgr42"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: I0315 06:59:39.885977    1063 reconciler.go:159] "Reconciler: start to sync state"
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: E0315 06:59:39.994846    1063 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 15 06:59:39 test-preload-764289 kubelet[1063]: E0315 06:59:39.994960    1063 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume podName:6b5386a0-d8aa-47d9-b61b-c691f0fcccbc nodeName:}" failed. No retries permitted until 2024-03-15 06:59:40.494896534 +0000 UTC m=+5.807114346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume") pod "coredns-6d4b75cb6d-vp6n7" (UID: "6b5386a0-d8aa-47d9-b61b-c691f0fcccbc") : object "kube-system"/"coredns" not registered
	Mar 15 06:59:40 test-preload-764289 kubelet[1063]: E0315 06:59:40.499358    1063 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 15 06:59:40 test-preload-764289 kubelet[1063]: E0315 06:59:40.499477    1063 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume podName:6b5386a0-d8aa-47d9-b61b-c691f0fcccbc nodeName:}" failed. No retries permitted until 2024-03-15 06:59:41.499429346 +0000 UTC m=+6.811647141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume") pod "coredns-6d4b75cb6d-vp6n7" (UID: "6b5386a0-d8aa-47d9-b61b-c691f0fcccbc") : object "kube-system"/"coredns" not registered
	Mar 15 06:59:40 test-preload-764289 kubelet[1063]: I0315 06:59:40.936965    1063 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e3ccfd7e-47ac-4548-a10c-ed6429b70af1 path="/var/lib/kubelet/pods/e3ccfd7e-47ac-4548-a10c-ed6429b70af1/volumes"
	Mar 15 06:59:41 test-preload-764289 kubelet[1063]: E0315 06:59:41.508019    1063 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 15 06:59:41 test-preload-764289 kubelet[1063]: E0315 06:59:41.508167    1063 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume podName:6b5386a0-d8aa-47d9-b61b-c691f0fcccbc nodeName:}" failed. No retries permitted until 2024-03-15 06:59:43.508150661 +0000 UTC m=+8.820368453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume") pod "coredns-6d4b75cb6d-vp6n7" (UID: "6b5386a0-d8aa-47d9-b61b-c691f0fcccbc") : object "kube-system"/"coredns" not registered
	Mar 15 06:59:41 test-preload-764289 kubelet[1063]: E0315 06:59:41.922840    1063 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vp6n7" podUID=6b5386a0-d8aa-47d9-b61b-c691f0fcccbc
	Mar 15 06:59:43 test-preload-764289 kubelet[1063]: E0315 06:59:43.525495    1063 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 15 06:59:43 test-preload-764289 kubelet[1063]: E0315 06:59:43.525629    1063 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume podName:6b5386a0-d8aa-47d9-b61b-c691f0fcccbc nodeName:}" failed. No retries permitted until 2024-03-15 06:59:47.52561085 +0000 UTC m=+12.837828642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6b5386a0-d8aa-47d9-b61b-c691f0fcccbc-config-volume") pod "coredns-6d4b75cb6d-vp6n7" (UID: "6b5386a0-d8aa-47d9-b61b-c691f0fcccbc") : object "kube-system"/"coredns" not registered
	Mar 15 06:59:43 test-preload-764289 kubelet[1063]: E0315 06:59:43.921965    1063 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vp6n7" podUID=6b5386a0-d8aa-47d9-b61b-c691f0fcccbc
	
	
	==> storage-provisioner [beb13f59cb7a18064c4107c1a1a3d27db73204b11c95541c02307dd4a2e2b0be] <==
	I0315 06:59:41.028862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-764289 -n test-preload-764289
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-764289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-764289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-764289
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-764289: (1.132466186s)
--- FAIL: TestPreload (244.29s)
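Note on the failure above: the kubelet log for test-preload-764289 repeats two conditions while the node restarts: the "kube-system"/"coredns" ConfigMap is not yet registered with the restarted kubelet (so the coredns config-volume mount backs off at 500ms/1s/2s/4s), and no CNI configuration file exists in /etc/cni/net.d/ yet, so the container runtime network stays NotReady and pod sync is skipped. For orientation only, the sketch below shows the general shape of a bridge-style conflist that would satisfy that kubelet check; the file name, subnet, and field values are illustrative assumptions and are not taken from this run (minikube normally writes its own file into that directory once the bridge CNI is configured).

	# Illustrative sketch only: path, file name, and values are assumptions, not artifacts of this run.
	# It shows the kind of bridge CNI config whose absence produces the
	# "No CNI configuration file in /etc/cni/net.d/" kubelet error seen above.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF
	# Once a valid config is present, the kubelet's CNI readiness check can pass and pod sync resumes.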

x
+
TestKubernetesUpgrade (396.37s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.015754559s)

-- stdout --
	* [kubernetes-upgrade-294072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-294072" primary control-plane node in "kubernetes-upgrade-294072" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0315 07:01:54.333557   46705 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:01:54.333678   46705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:54.333688   46705 out.go:304] Setting ErrFile to fd 2...
	I0315 07:01:54.333692   46705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:54.333939   46705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:01:54.334509   46705 out.go:298] Setting JSON to false
	I0315 07:01:54.335435   46705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6211,"bootTime":1710479904,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:01:54.335494   46705 start.go:139] virtualization: kvm guest
	I0315 07:01:54.337618   46705 out.go:177] * [kubernetes-upgrade-294072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:01:54.341327   46705 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:01:54.339903   46705 notify.go:220] Checking for updates...
	I0315 07:01:54.344153   46705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:01:54.346320   46705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:01:54.348554   46705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:01:54.349830   46705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:01:54.351038   46705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:01:54.352523   46705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:01:54.391509   46705 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:01:54.393249   46705 start.go:297] selected driver: kvm2
	I0315 07:01:54.393267   46705 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:01:54.393277   46705 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:01:54.394274   46705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:01:54.394346   46705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:01:54.410025   46705 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:01:54.410083   46705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:01:54.410307   46705 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 07:01:54.410373   46705 cni.go:84] Creating CNI manager for ""
	I0315 07:01:54.410391   46705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:01:54.410399   46705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:01:54.410459   46705 start.go:340] cluster config:
	{Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:01:54.410598   46705 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:01:54.412353   46705 out.go:177] * Starting "kubernetes-upgrade-294072" primary control-plane node in "kubernetes-upgrade-294072" cluster
	I0315 07:01:54.413575   46705 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:01:54.413615   46705 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 07:01:54.413621   46705 cache.go:56] Caching tarball of preloaded images
	I0315 07:01:54.413714   46705 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:01:54.413726   46705 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 07:01:54.414010   46705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/config.json ...
	I0315 07:01:54.414031   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/config.json: {Name:mkaac0d2ac1715e7d5293c7c91c423c9c71583d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:54.414236   46705 start.go:360] acquireMachinesLock for kubernetes-upgrade-294072: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:01:54.414270   46705 start.go:364] duration metric: took 16.572µs to acquireMachinesLock for "kubernetes-upgrade-294072"
	I0315 07:01:54.414286   46705 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:01:54.414408   46705 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:01:54.416832   46705 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 07:01:54.417041   46705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:01:54.417100   46705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:01:54.433301   46705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0315 07:01:54.433707   46705 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:01:54.434224   46705 main.go:141] libmachine: Using API Version  1
	I0315 07:01:54.434245   46705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:01:54.434535   46705 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:01:54.434767   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:01:54.434916   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:01:54.435068   46705 start.go:159] libmachine.API.Create for "kubernetes-upgrade-294072" (driver="kvm2")
	I0315 07:01:54.435106   46705 client.go:168] LocalClient.Create starting
	I0315 07:01:54.435141   46705 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:01:54.435179   46705 main.go:141] libmachine: Decoding PEM data...
	I0315 07:01:54.435197   46705 main.go:141] libmachine: Parsing certificate...
	I0315 07:01:54.435270   46705 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:01:54.435296   46705 main.go:141] libmachine: Decoding PEM data...
	I0315 07:01:54.435313   46705 main.go:141] libmachine: Parsing certificate...
	I0315 07:01:54.435336   46705 main.go:141] libmachine: Running pre-create checks...
	I0315 07:01:54.435355   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .PreCreateCheck
	I0315 07:01:54.435893   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetConfigRaw
	I0315 07:01:54.436362   46705 main.go:141] libmachine: Creating machine...
	I0315 07:01:54.436381   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .Create
	I0315 07:01:54.436560   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Creating KVM machine...
	I0315 07:01:54.437689   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found existing default KVM network
	I0315 07:01:54.438315   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:54.438190   46785 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0315 07:01:54.438336   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | created network xml: 
	I0315 07:01:54.438348   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | <network>
	I0315 07:01:54.438354   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   <name>mk-kubernetes-upgrade-294072</name>
	I0315 07:01:54.438365   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   <dns enable='no'/>
	I0315 07:01:54.438370   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   
	I0315 07:01:54.438384   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 07:01:54.438392   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |     <dhcp>
	I0315 07:01:54.438405   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 07:01:54.438420   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |     </dhcp>
	I0315 07:01:54.438431   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   </ip>
	I0315 07:01:54.438435   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG |   
	I0315 07:01:54.438444   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | </network>
	I0315 07:01:54.438450   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | 
	I0315 07:01:54.444130   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | trying to create private KVM network mk-kubernetes-upgrade-294072 192.168.39.0/24...
	I0315 07:01:54.514441   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | private KVM network mk-kubernetes-upgrade-294072 192.168.39.0/24 created
	I0315 07:01:54.514472   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072 ...
	I0315 07:01:54.514496   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:54.514419   46785 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:01:54.514515   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:01:54.514603   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:01:54.752925   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:54.752816   46785 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa...
	I0315 07:01:54.911586   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:54.911490   46785 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/kubernetes-upgrade-294072.rawdisk...
	I0315 07:01:54.911611   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Writing magic tar header
	I0315 07:01:54.911624   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Writing SSH key tar header
	I0315 07:01:54.911632   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:54.911604   46785 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072 ...
	I0315 07:01:54.911725   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072
	I0315 07:01:54.911758   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:01:54.911770   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:01:54.911784   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:01:54.911803   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:01:54.911820   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072 (perms=drwx------)
	I0315 07:01:54.911838   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:01:54.911852   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:01:54.911862   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:01:54.911879   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:01:54.911889   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Checking permissions on dir: /home
	I0315 07:01:54.911900   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:01:54.911913   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:01:54.911921   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Creating domain...
	I0315 07:01:54.911940   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Skipping /home - not owner
	I0315 07:01:54.912945   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) define libvirt domain using xml: 
	I0315 07:01:54.912970   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) <domain type='kvm'>
	I0315 07:01:54.912983   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <name>kubernetes-upgrade-294072</name>
	I0315 07:01:54.912995   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <memory unit='MiB'>2200</memory>
	I0315 07:01:54.913020   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <vcpu>2</vcpu>
	I0315 07:01:54.913040   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <features>
	I0315 07:01:54.913052   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <acpi/>
	I0315 07:01:54.913059   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <apic/>
	I0315 07:01:54.913077   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <pae/>
	I0315 07:01:54.913092   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     
	I0315 07:01:54.913101   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   </features>
	I0315 07:01:54.913111   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <cpu mode='host-passthrough'>
	I0315 07:01:54.913119   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   
	I0315 07:01:54.913141   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   </cpu>
	I0315 07:01:54.913153   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <os>
	I0315 07:01:54.913160   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <type>hvm</type>
	I0315 07:01:54.913172   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <boot dev='cdrom'/>
	I0315 07:01:54.913178   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <boot dev='hd'/>
	I0315 07:01:54.913190   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <bootmenu enable='no'/>
	I0315 07:01:54.913201   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   </os>
	I0315 07:01:54.913222   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   <devices>
	I0315 07:01:54.913238   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <disk type='file' device='cdrom'>
	I0315 07:01:54.913255   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/boot2docker.iso'/>
	I0315 07:01:54.913269   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <target dev='hdc' bus='scsi'/>
	I0315 07:01:54.913280   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <readonly/>
	I0315 07:01:54.913291   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </disk>
	I0315 07:01:54.913305   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <disk type='file' device='disk'>
	I0315 07:01:54.913321   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:01:54.913338   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/kubernetes-upgrade-294072.rawdisk'/>
	I0315 07:01:54.913354   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <target dev='hda' bus='virtio'/>
	I0315 07:01:54.913364   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </disk>
	I0315 07:01:54.913376   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <interface type='network'>
	I0315 07:01:54.913391   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <source network='mk-kubernetes-upgrade-294072'/>
	I0315 07:01:54.913402   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <model type='virtio'/>
	I0315 07:01:54.913410   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </interface>
	I0315 07:01:54.913422   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <interface type='network'>
	I0315 07:01:54.913436   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <source network='default'/>
	I0315 07:01:54.913450   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <model type='virtio'/>
	I0315 07:01:54.913461   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </interface>
	I0315 07:01:54.913473   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <serial type='pty'>
	I0315 07:01:54.913485   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <target port='0'/>
	I0315 07:01:54.913508   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </serial>
	I0315 07:01:54.913532   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <console type='pty'>
	I0315 07:01:54.913559   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <target type='serial' port='0'/>
	I0315 07:01:54.913569   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </console>
	I0315 07:01:54.913580   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     <rng model='virtio'>
	I0315 07:01:54.913592   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)       <backend model='random'>/dev/random</backend>
	I0315 07:01:54.913602   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     </rng>
	I0315 07:01:54.913610   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     
	I0315 07:01:54.913620   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)     
	I0315 07:01:54.913654   46705 main.go:141] libmachine: (kubernetes-upgrade-294072)   </devices>
	I0315 07:01:54.913688   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) </domain>
	I0315 07:01:54.913705   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) 
	I0315 07:01:54.917687   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:ff:57:cf in network default
	I0315 07:01:54.918278   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Ensuring networks are active...
	I0315 07:01:54.918299   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:54.918927   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Ensuring network default is active
	I0315 07:01:54.919188   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Ensuring network mk-kubernetes-upgrade-294072 is active
	I0315 07:01:54.919677   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Getting domain xml...
	I0315 07:01:54.920377   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Creating domain...
	I0315 07:01:56.236162   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Waiting to get IP...
	I0315 07:01:56.236996   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.237346   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.237381   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:56.237327   46785 retry.go:31] will retry after 241.734246ms: waiting for machine to come up
	I0315 07:01:56.480879   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.481355   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.481379   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:56.481296   46785 retry.go:31] will retry after 289.767472ms: waiting for machine to come up
	I0315 07:01:56.772850   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.773306   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:56.773329   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:56.773226   46785 retry.go:31] will retry after 435.270918ms: waiting for machine to come up
	I0315 07:01:57.210629   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:57.211056   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:57.211080   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:57.211023   46785 retry.go:31] will retry after 384.452634ms: waiting for machine to come up
	I0315 07:01:57.596661   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:57.597118   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:57.597146   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:57.597086   46785 retry.go:31] will retry after 616.018706ms: waiting for machine to come up
	I0315 07:01:58.216171   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:58.216612   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:58.216639   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:58.216571   46785 retry.go:31] will retry after 936.899094ms: waiting for machine to come up
	I0315 07:01:59.154483   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:01:59.154910   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:01:59.154938   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:01:59.154869   46785 retry.go:31] will retry after 845.669063ms: waiting for machine to come up
	I0315 07:02:00.002619   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:00.003132   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:00.003160   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:00.003084   46785 retry.go:31] will retry after 1.454556068s: waiting for machine to come up
	I0315 07:02:01.459628   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:01.460010   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:01.460036   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:01.459972   46785 retry.go:31] will retry after 1.348404322s: waiting for machine to come up
	I0315 07:02:02.809723   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:02.810107   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:02.810135   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:02.810066   46785 retry.go:31] will retry after 1.418935328s: waiting for machine to come up
	I0315 07:02:04.230784   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:04.231219   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:04.231239   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:04.231179   46785 retry.go:31] will retry after 1.849850322s: waiting for machine to come up
	I0315 07:02:06.082726   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:06.083157   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:06.083188   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:06.083086   46785 retry.go:31] will retry after 2.921005566s: waiting for machine to come up
	I0315 07:02:09.005561   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:09.005879   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:09.005906   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:09.005851   46785 retry.go:31] will retry after 4.210591141s: waiting for machine to come up
	I0315 07:02:13.218727   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:13.219221   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find current IP address of domain kubernetes-upgrade-294072 in network mk-kubernetes-upgrade-294072
	I0315 07:02:13.219244   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | I0315 07:02:13.219178   46785 retry.go:31] will retry after 4.632769128s: waiting for machine to come up
	I0315 07:02:17.855757   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:17.856212   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Found IP for machine: 192.168.39.216
	I0315 07:02:17.856225   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Reserving static IP address...
	I0315 07:02:17.856235   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has current primary IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:17.856602   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-294072", mac: "52:54:00:c6:3c:02", ip: "192.168.39.216"} in network mk-kubernetes-upgrade-294072
	I0315 07:02:17.929630   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Getting to WaitForSSH function...
	I0315 07:02:17.929662   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Reserved static IP address: 192.168.39.216
	I0315 07:02:17.929699   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Waiting for SSH to be available...
	I0315 07:02:17.932601   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:17.933078   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:17.933118   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:17.933200   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Using SSH client type: external
	I0315 07:02:17.933225   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa (-rw-------)
	I0315 07:02:17.933258   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:02:17.933274   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | About to run SSH command:
	I0315 07:02:17.933282   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | exit 0
	I0315 07:02:18.052286   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | SSH cmd err, output: <nil>: 
	I0315 07:02:18.052577   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) KVM machine creation complete!
	I0315 07:02:18.052874   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetConfigRaw
	I0315 07:02:18.053385   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:18.053603   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:18.053766   46705 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 07:02:18.053779   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetState
	I0315 07:02:18.054850   46705 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 07:02:18.054865   46705 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 07:02:18.054871   46705 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 07:02:18.054882   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.056949   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.057393   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.057424   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.057583   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.057795   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.057978   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.058116   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.058283   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:18.058466   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:18.058479   46705 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 07:02:18.156112   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:02:18.156139   46705 main.go:141] libmachine: Detecting the provisioner...
	I0315 07:02:18.156149   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.158976   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.159338   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.159370   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.159481   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.159692   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.159858   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.159987   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.160158   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:18.160355   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:18.160368   46705 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 07:02:18.262022   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 07:02:18.262109   46705 main.go:141] libmachine: found compatible host: buildroot
	I0315 07:02:18.262127   46705 main.go:141] libmachine: Provisioning with buildroot...
	I0315 07:02:18.262144   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:02:18.262400   46705 buildroot.go:166] provisioning hostname "kubernetes-upgrade-294072"
	I0315 07:02:18.262449   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:02:18.262634   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.265066   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.265438   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.265468   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.265635   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.265910   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.266068   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.266199   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.266348   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:18.266544   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:18.266558   46705 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-294072 && echo "kubernetes-upgrade-294072" | sudo tee /etc/hostname
	I0315 07:02:18.380350   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-294072
	
	I0315 07:02:18.380388   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.383312   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.383638   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.383672   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.383834   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.384018   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.384180   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.384362   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.384583   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:18.384795   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:18.384814   46705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-294072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-294072/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-294072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:02:18.500262   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:02:18.500293   46705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:02:18.500316   46705 buildroot.go:174] setting up certificates
	I0315 07:02:18.500328   46705 provision.go:84] configureAuth start
	I0315 07:02:18.500340   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:02:18.500637   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:02:18.503274   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.503637   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.503668   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.503789   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.505946   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.506289   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.506334   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.506485   46705 provision.go:143] copyHostCerts
	I0315 07:02:18.506553   46705 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:02:18.506566   46705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:02:18.506645   46705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:02:18.506791   46705 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:02:18.506804   46705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:02:18.506845   46705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:02:18.506939   46705 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:02:18.506948   46705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:02:18.506989   46705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:02:18.507069   46705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-294072 san=[127.0.0.1 192.168.39.216 kubernetes-upgrade-294072 localhost minikube]
	I0315 07:02:18.575481   46705 provision.go:177] copyRemoteCerts
	I0315 07:02:18.575543   46705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:02:18.575576   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.578224   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.578535   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.578568   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.578746   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.578918   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.579075   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.579209   46705 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:02:18.659666   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:02:18.686231   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0315 07:02:18.712659   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:02:18.737484   46705 provision.go:87] duration metric: took 237.143106ms to configureAuth
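The provision step above (configureAuth / provision.go:117) copies the host CA material into place and then mints a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.39.216, the machine hostname, localhost and minikube. The Go sketch below is only an illustration of producing that kind of SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs against the ca.pem/ca-key.pem pair shown in the log, and nothing here beyond the SAN values is taken from minikube's code.

// certsketch.go: illustrative only -- a self-signed server certificate
// carrying the same SANs the provision.go:117 line lists for this node.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-294072"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 log line above.
		DNSNames:    []string{"kubernetes-upgrade-294072", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.216")},
	}
	// Self-signed here for brevity; the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

Piping the output through openssl x509 -text -noout is a quick way to confirm the DNS and IP SANs landed in the certificate.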
	I0315 07:02:18.737519   46705 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:02:18.737720   46705 config.go:182] Loaded profile config "kubernetes-upgrade-294072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:02:18.737806   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:18.740372   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.740708   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:18.740732   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:18.740876   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:18.741090   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.741256   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:18.741406   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:18.741599   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:18.741783   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:18.741803   46705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:02:19.007575   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:02:19.007625   46705 main.go:141] libmachine: Checking connection to Docker...
	I0315 07:02:19.007636   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetURL
	I0315 07:02:19.008861   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | Using libvirt version 6000000
	I0315 07:02:19.011009   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.011326   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.011363   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.011488   46705 main.go:141] libmachine: Docker is up and running!
	I0315 07:02:19.011502   46705 main.go:141] libmachine: Reticulating splines...
	I0315 07:02:19.011508   46705 client.go:171] duration metric: took 24.576391749s to LocalClient.Create
	I0315 07:02:19.011529   46705 start.go:167] duration metric: took 24.576463398s to libmachine.API.Create "kubernetes-upgrade-294072"
	I0315 07:02:19.011540   46705 start.go:293] postStartSetup for "kubernetes-upgrade-294072" (driver="kvm2")
	I0315 07:02:19.011558   46705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:02:19.011579   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:19.011805   46705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:02:19.011827   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:19.014096   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.014197   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.014228   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.014396   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:19.014568   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:19.014685   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:19.014801   46705 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:02:19.096097   46705 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:02:19.100634   46705 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:02:19.100657   46705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:02:19.100729   46705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:02:19.100803   46705 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:02:19.100885   46705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:02:19.111331   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:02:19.136359   46705 start.go:296] duration metric: took 124.800551ms for postStartSetup
	I0315 07:02:19.136408   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetConfigRaw
	I0315 07:02:19.136981   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:02:19.139680   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.140043   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.140082   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.140276   46705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/config.json ...
	I0315 07:02:19.140502   46705 start.go:128] duration metric: took 24.726080626s to createHost
	I0315 07:02:19.140528   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:19.142589   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.142913   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.142939   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.143068   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:19.143220   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:19.143365   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:19.143462   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:19.143570   46705 main.go:141] libmachine: Using SSH client type: native
	I0315 07:02:19.143738   46705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:02:19.143755   46705 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:02:19.245470   46705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710486139.217327614
	
	I0315 07:02:19.245493   46705 fix.go:216] guest clock: 1710486139.217327614
	I0315 07:02:19.245499   46705 fix.go:229] Guest: 2024-03-15 07:02:19.217327614 +0000 UTC Remote: 2024-03-15 07:02:19.140515258 +0000 UTC m=+24.872258993 (delta=76.812356ms)
	I0315 07:02:19.245517   46705 fix.go:200] guest clock delta is within tolerance: 76.812356ms
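fix.go above reads date +%s.%N on the guest and accepts the machine once the guest/host delta (76.8ms here) falls inside a tolerance. A rough sketch of that comparison follows; the one-second tolerance and the helper name withinTolerance are assumptions for illustration, not values taken from minikube.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses the guest's "date +%s.%N" output and reports
// the delta against the local (host) clock and whether it is within tol.
func withinTolerance(guestOut string, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Example value copied from the log line above; run today the delta is
	// simply the age of that sample, so the check reports false.
	d, ok := withinTolerance("1710486139.217327614", time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}

During provisioning the two clocks are read back to back, which is why the logged delta is only tens of milliseconds.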
	I0315 07:02:19.245522   46705 start.go:83] releasing machines lock for "kubernetes-upgrade-294072", held for 24.831245451s
	I0315 07:02:19.245547   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:19.245804   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:02:19.248541   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.248874   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.248909   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.249081   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:19.249492   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:19.249667   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:02:19.249750   46705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:02:19.249793   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:19.249848   46705 ssh_runner.go:195] Run: cat /version.json
	I0315 07:02:19.249864   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:02:19.252608   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.252719   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.253062   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.253098   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:19.253129   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.253152   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:19.253260   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:19.253363   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:02:19.253467   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:19.253542   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:02:19.253617   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:19.253687   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:02:19.253756   46705 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:02:19.253801   46705 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:02:19.366188   46705 ssh_runner.go:195] Run: systemctl --version
	I0315 07:02:19.373068   46705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:02:19.541989   46705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:02:19.548177   46705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:02:19.548260   46705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:02:19.566530   46705 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:02:19.566556   46705 start.go:494] detecting cgroup driver to use...
	I0315 07:02:19.566617   46705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:02:19.585270   46705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:02:19.600724   46705 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:02:19.600777   46705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:02:19.615409   46705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:02:19.629900   46705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:02:19.749558   46705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:02:19.922434   46705 docker.go:233] disabling docker service ...
	I0315 07:02:19.922501   46705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:02:19.941281   46705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:02:19.957412   46705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:02:20.088793   46705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:02:20.217084   46705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:02:20.236151   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:02:20.258896   46705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:02:20.258950   46705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:02:20.271174   46705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:02:20.271262   46705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:02:20.283859   46705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:02:20.296058   46705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:02:20.308206   46705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:02:20.320763   46705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:02:20.336572   46705 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:02:20.336635   46705 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:02:20.353337   46705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
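The three commands above are the usual CNI bridge preparation: probe net.bridge.bridge-nf-call-iptables, load br_netfilter when the sysctl file is absent, then turn on IPv4 forwarding. A hedged Go equivalent of that sequence is sketched below; the paths and the modprobe call mirror the log, but this is not minikube's implementation and it needs root on the guest to actually succeed.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// The sysctl only exists once br_netfilter is loaded.
	if _, err := os.Stat(sysctlPath); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}

	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("enabling ip_forward failed (root required): %v\n", err)
	}
}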
	I0315 07:02:20.365682   46705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:02:20.515876   46705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:02:20.679992   46705 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:02:20.680072   46705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:02:20.685313   46705 start.go:562] Will wait 60s for crictl version
	I0315 07:02:20.685381   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:20.690015   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:02:20.735985   46705 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:02:20.736067   46705 ssh_runner.go:195] Run: crio --version
	I0315 07:02:20.765365   46705 ssh_runner.go:195] Run: crio --version
	I0315 07:02:20.796599   46705 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:02:20.798062   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:02:20.801295   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:20.801787   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:02:09 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:02:20.801810   46705 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:02:20.802159   46705 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:02:20.806980   46705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:02:20.821812   46705 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:02:20.821940   46705 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:02:20.822005   46705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:02:20.857884   46705 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:02:20.857956   46705 ssh_runner.go:195] Run: which lz4
	I0315 07:02:20.862206   46705 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:02:20.866606   46705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:02:20.866635   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:02:22.800399   46705 crio.go:444] duration metric: took 1.938229566s to copy over tarball
	I0315 07:02:22.800491   46705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:02:25.396133   46705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.595597781s)
	I0315 07:02:25.396170   46705 crio.go:451] duration metric: took 2.595739507s to extract the tarball
	I0315 07:02:25.396180   46705 ssh_runner.go:146] rm: /preloaded.tar.lz4
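The preload sequence above is: crictl finds no v1.20.0 images, the ~473 MB preloaded-images tarball is copied to /preloaded.tar.lz4, unpacked into /var with an lz4-aware tar that preserves security.capability xattrs, and then removed. A minimal sketch of the extraction step with the same tar flags is below; extractPreload is a made-up helper for illustration and the command must run as root on the guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image preload into dest,
// preserving the security.capability xattrs, as in the log above.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// The log then removes the tarball to reclaim space.
	_ = os.Remove("/preloaded.tar.lz4")
}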
	I0315 07:02:25.442089   46705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:02:25.489267   46705 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:02:25.489291   46705 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:02:25.489389   46705 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:02:25.489496   46705 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:02:25.489391   46705 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:02:25.489399   46705 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:02:25.489402   46705 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:02:25.489417   46705 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:02:25.489421   46705 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:02:25.489427   46705 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:02:25.491081   46705 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:02:25.491107   46705 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:02:25.491109   46705 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:02:25.491092   46705 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:02:25.491155   46705 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:02:25.491157   46705 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:02:25.491239   46705 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:02:25.491341   46705 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:02:25.747706   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:02:25.759512   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:02:25.773704   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:02:25.773829   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:02:25.777257   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:02:25.807697   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:02:25.838701   46705 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:02:25.838744   46705 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:02:25.838799   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.848852   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:02:25.860214   46705 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:02:25.860247   46705 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:02:25.860282   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.890467   46705 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:02:25.890503   46705 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:02:25.890545   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.932488   46705 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:02:25.932531   46705 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:02:25.932592   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.938257   46705 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:02:25.938311   46705 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:02:25.938348   46705 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:02:25.938385   46705 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:02:25.938413   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:02:25.938358   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.938420   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.963597   46705 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:02:25.963644   46705 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:02:25.963683   46705 ssh_runner.go:195] Run: which crictl
	I0315 07:02:25.963691   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:02:25.963773   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:02:25.963803   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:02:25.963867   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:02:26.010082   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:02:26.010087   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:02:26.104327   46705 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:02:26.104409   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:02:26.104451   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:02:26.104535   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:02:26.104584   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:02:26.116481   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:02:26.148100   46705 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:02:26.354182   46705 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:02:26.499965   46705 cache_images.go:92] duration metric: took 1.01065537s to LoadCachedImages
	W0315 07:02:26.500063   46705 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0315 07:02:26.500080   46705 kubeadm.go:928] updating node { 192.168.39.216 8443 v1.20.0 crio true true} ...
	I0315 07:02:26.500233   46705 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-294072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:02:26.500339   46705 ssh_runner.go:195] Run: crio config
	I0315 07:02:26.554872   46705 cni.go:84] Creating CNI manager for ""
	I0315 07:02:26.554902   46705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:02:26.554918   46705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:02:26.554942   46705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-294072 NodeName:kubernetes-upgrade-294072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:02:26.555131   46705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-294072"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:02:26.555202   46705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:02:26.565875   46705 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:02:26.565943   46705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:02:26.576234   46705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0315 07:02:26.594932   46705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:02:26.612679   46705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
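The kubeadm, kubelet and kube-proxy YAML shown above is rendered from the option set logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new. As a shape-of-the-idea sketch, the snippet below renders just a ClusterConfiguration fragment with text/template; the template text and the clusterOpts struct are simplified stand-ins, not minikube's real template or types.

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for the options visible in the kubeadm.go:181 line.
type clusterOpts struct {
	KubernetesVersion string
	ControlPlane      string
	ServiceCIDR       string
	PodSubnet         string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlane}}:8443
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	opts := clusterOpts{
		KubernetesVersion: "v1.20.0",
		ControlPlane:      "control-plane.minikube.internal",
		ServiceCIDR:       "10.96.0.0/12",
		PodSubnet:         "10.244.0.0/16",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}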
	I0315 07:02:26.630458   46705 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I0315 07:02:26.634708   46705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
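The /etc/hosts update above is a grep-out-then-append rewrite: drop any stale control-plane.minikube.internal record, append the fresh IP/name pair, and copy the temp file back into place. The same idea in Go, as a rough sketch; ensureHostRecord is a hypothetical helper, and rewriting /etc/hosts of course requires root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord drops any existing line for name from the hosts file
// and appends "ip<TAB>name", mirroring the bash one-liner in the log.
func ensureHostRecord(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale record for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative call only; root is needed to rewrite the real file.
	if err := ensureHostRecord("/etc/hosts", "192.168.39.216", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}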
	I0315 07:02:26.647461   46705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:02:26.779277   46705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:02:26.798626   46705 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072 for IP: 192.168.39.216
	I0315 07:02:26.798651   46705 certs.go:194] generating shared ca certs ...
	I0315 07:02:26.798670   46705 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:26.798837   46705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:02:26.798881   46705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:02:26.798891   46705 certs.go:256] generating profile certs ...
	I0315 07:02:26.798941   46705 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.key
	I0315 07:02:26.798954   46705 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.crt with IP's: []
	I0315 07:02:26.869992   46705 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.crt ...
	I0315 07:02:26.870019   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.crt: {Name:mkcfad57beb5af962d0011f7c00e9a0a153585e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:26.870177   46705 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.key ...
	I0315 07:02:26.870195   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.key: {Name:mkbc4900cfca87d3cda2145e62549b29b33a8fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:26.870310   46705 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key.ed5b6e78
	I0315 07:02:26.870338   46705 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt.ed5b6e78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I0315 07:02:26.933910   46705 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt.ed5b6e78 ...
	I0315 07:02:26.933938   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt.ed5b6e78: {Name:mkb96409cb259322e22a9833a7df745656ca53ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:26.934116   46705 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key.ed5b6e78 ...
	I0315 07:02:26.934133   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key.ed5b6e78: {Name:mk590c49e2a40bb448207569779ca18967244b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:26.934225   46705 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt.ed5b6e78 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt
	I0315 07:02:26.934335   46705 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key.ed5b6e78 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key
	I0315 07:02:26.934415   46705 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key
	I0315 07:02:26.934437   46705 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.crt with IP's: []
	I0315 07:02:27.205785   46705 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.crt ...
	I0315 07:02:27.205816   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.crt: {Name:mkd335f271812fa2eb54e87f51a45aac0ebffb7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:27.206010   46705 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key ...
	I0315 07:02:27.206029   46705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key: {Name:mkcbdb8d7d4965aad96eb690b61b95fd9ce5ebd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:27.206218   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:02:27.206273   46705 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:02:27.206288   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:02:27.206319   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:02:27.206350   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:02:27.206381   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:02:27.206438   46705 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:02:27.207091   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:02:27.238111   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:02:27.266147   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:02:27.293189   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:02:27.320557   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 07:02:27.347245   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:02:27.373772   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:02:27.401547   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:02:27.427888   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:02:27.453832   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:02:27.480893   46705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:02:27.507102   46705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:02:27.525799   46705 ssh_runner.go:195] Run: openssl version
	I0315 07:02:27.532698   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:02:27.544911   46705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:02:27.550307   46705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:02:27.550367   46705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:02:27.556975   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:02:27.569194   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:02:27.581355   46705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:02:27.586578   46705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:02:27.586633   46705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:02:27.593078   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:02:27.611571   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:02:27.628339   46705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:02:27.633709   46705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:02:27.633788   46705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:02:27.641527   46705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:02:27.657576   46705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:02:27.665571   46705 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:02:27.665632   46705 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:02:27.665749   46705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:02:27.665821   46705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:02:27.717586   46705 cri.go:89] found id: ""
	I0315 07:02:27.717658   46705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:02:27.729327   46705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:02:27.740900   46705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:02:27.751859   46705 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:02:27.751880   46705 kubeadm.go:156] found existing configuration files:
	
	I0315 07:02:27.751930   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:02:27.762597   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:02:27.762675   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:02:27.773561   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:02:27.784191   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:02:27.784269   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:02:27.794975   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:02:27.805306   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:02:27.805362   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:02:27.816435   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:02:27.826626   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:02:27.826699   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:02:27.837345   46705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:02:28.108793   46705 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:04:25.390594   46705 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:04:25.390692   46705 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:04:25.392398   46705 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:04:25.392484   46705 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:04:25.392569   46705 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:04:25.392688   46705 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:04:25.392784   46705 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:04:25.392844   46705 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:04:25.394541   46705 out.go:204]   - Generating certificates and keys ...
	I0315 07:04:25.394641   46705 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:04:25.394712   46705 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:04:25.394783   46705 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:04:25.394845   46705 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:04:25.394902   46705 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:04:25.394945   46705 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:04:25.394994   46705 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:04:25.395113   46705 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-294072 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0315 07:04:25.395164   46705 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:04:25.395296   46705 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-294072 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0315 07:04:25.395378   46705 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:04:25.395468   46705 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:04:25.395531   46705 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:04:25.395608   46705 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:04:25.395690   46705 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:04:25.395757   46705 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:04:25.395815   46705 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:04:25.395861   46705 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:04:25.395957   46705 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:04:25.396031   46705 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:04:25.396072   46705 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:04:25.396133   46705 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:04:25.398494   46705 out.go:204]   - Booting up control plane ...
	I0315 07:04:25.398588   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:04:25.398654   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:04:25.398721   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:04:25.398789   46705 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:04:25.398937   46705 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:04:25.398990   46705 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:04:25.399048   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:04:25.399226   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:04:25.399300   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:04:25.399481   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:04:25.399541   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:04:25.399713   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:04:25.399776   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:04:25.399929   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:04:25.399986   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:04:25.400165   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:04:25.400179   46705 kubeadm.go:309] 
	I0315 07:04:25.400216   46705 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:04:25.400251   46705 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:04:25.400257   46705 kubeadm.go:309] 
	I0315 07:04:25.400294   46705 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:04:25.400323   46705 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:04:25.400419   46705 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:04:25.400429   46705 kubeadm.go:309] 
	I0315 07:04:25.400540   46705 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:04:25.400578   46705 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:04:25.400609   46705 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:04:25.400615   46705 kubeadm.go:309] 
	I0315 07:04:25.400744   46705 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:04:25.400859   46705 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:04:25.400879   46705 kubeadm.go:309] 
	I0315 07:04:25.401002   46705 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:04:25.401117   46705 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:04:25.401240   46705 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:04:25.401326   46705 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:04:25.401357   46705 kubeadm.go:309] 
	W0315 07:04:25.401483   46705 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-294072 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-294072 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
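The kubeadm output above already lists the checks to run when the kubelet never becomes healthy during wait-control-plane. A small sketch that bundles those same checks, assuming a systemd host with CRI-O on /var/run/crio/crio.sock as in this run:

    #!/usr/bin/env bash
    # Sketch: the troubleshooting steps suggested by kubeadm, run in sequence.
    set -uo pipefail   # no -e: keep going even when a check fails

    # 1. Is the kubelet service running at all?
    sudo systemctl status kubelet --no-pager

    # 2. Recent kubelet logs usually show why it exits (cgroup driver, bad config, ...).
    sudo journalctl -xeu kubelet --no-pager | tail -n 100

    # 3. The health endpoint kubeadm polls while waiting for the control plane.
    curl -sSL http://localhost:10248/healthz || echo "kubelet healthz not reachable"

    # 4. Control-plane containers, if the runtime ever started any.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause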
	
	I0315 07:04:25.401527   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:04:27.895061   46705 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.493510847s)
	I0315 07:04:27.895145   46705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:04:27.911684   46705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:04:27.922362   46705 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:04:27.922380   46705 kubeadm.go:156] found existing configuration files:
	
	I0315 07:04:27.922420   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:04:27.933664   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:04:27.933721   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:04:27.943811   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:04:27.952954   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:04:27.953012   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:04:27.962786   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:04:27.972128   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:04:27.972199   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:04:27.981864   46705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:04:27.991009   46705 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:04:27.991077   46705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
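The grep-then-rm sequence above checks each kubeconfig for the expected control-plane endpoint and removes any file that is missing it (or missing entirely) before kubeadm init is retried. A compressed sketch of the same loop, assuming the endpoint and file set seen in this run:

    #!/usr/bin/env bash
    # Sketch: drop kubeconfigs that do not reference the expected API endpoint,
    # mirroring the per-file grep and rm -f calls in the log above.
    set -uo pipefail

    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/${conf}"
      if ! sudo grep -q "$endpoint" "$path" 2>/dev/null; then
        # Missing file or wrong endpoint: remove it so kubeadm regenerates it.
        sudo rm -f "$path"
      fi
    done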
	I0315 07:04:28.000326   46705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:04:28.074411   46705 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:04:28.074469   46705 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:04:28.228870   46705 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:04:28.228964   46705 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:04:28.229046   46705 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:04:28.438412   46705 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:04:28.441667   46705 out.go:204]   - Generating certificates and keys ...
	I0315 07:04:28.441786   46705 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:04:28.441883   46705 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:04:28.441973   46705 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:04:28.442050   46705 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:04:28.442160   46705 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:04:28.442238   46705 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:04:28.442346   46705 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:04:28.442744   46705 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:04:28.443432   46705 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:04:28.444763   46705 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:04:28.444918   46705 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:04:28.445030   46705 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:04:28.596597   46705 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:04:28.732275   46705 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:04:28.979825   46705 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:04:29.319940   46705 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:04:29.338272   46705 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:04:29.339860   46705 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:04:29.339925   46705 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:04:29.511117   46705 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:04:29.513462   46705 out.go:204]   - Booting up control plane ...
	I0315 07:04:29.513614   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:04:29.522556   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:04:29.523184   46705 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:04:29.524509   46705 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:04:29.528641   46705 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:05:09.532918   46705 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:05:09.533328   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:05:09.533628   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:05:14.534105   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:05:14.534396   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:05:24.535176   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:05:24.535469   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:05:44.534629   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:05:44.534831   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:06:24.535193   46705 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:06:24.535480   46705 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:06:24.535500   46705 kubeadm.go:309] 
	I0315 07:06:24.535560   46705 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:06:24.535625   46705 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:06:24.535635   46705 kubeadm.go:309] 
	I0315 07:06:24.535722   46705 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:06:24.535782   46705 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:06:24.535928   46705 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:06:24.535940   46705 kubeadm.go:309] 
	I0315 07:06:24.536104   46705 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:06:24.536159   46705 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:06:24.536203   46705 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:06:24.536213   46705 kubeadm.go:309] 
	I0315 07:06:24.536391   46705 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:06:24.536574   46705 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:06:24.536591   46705 kubeadm.go:309] 
	I0315 07:06:24.536722   46705 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:06:24.536843   46705 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:06:24.536945   46705 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:06:24.537046   46705 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:06:24.537059   46705 kubeadm.go:309] 
	I0315 07:06:24.537739   46705 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:06:24.537856   46705 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:06:24.538052   46705 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:06:24.538052   46705 kubeadm.go:393] duration metric: took 3m56.872421721s to StartCluster
	I0315 07:06:24.538116   46705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:06:24.538175   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:06:24.596598   46705 cri.go:89] found id: ""
	I0315 07:06:24.596629   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.596640   46705 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:06:24.596648   46705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:06:24.596710   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:06:24.638735   46705 cri.go:89] found id: ""
	I0315 07:06:24.638763   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.638773   46705 logs.go:278] No container was found matching "etcd"
	I0315 07:06:24.638781   46705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:06:24.638843   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:06:24.686160   46705 cri.go:89] found id: ""
	I0315 07:06:24.686191   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.686199   46705 logs.go:278] No container was found matching "coredns"
	I0315 07:06:24.686208   46705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:06:24.686266   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:06:24.733982   46705 cri.go:89] found id: ""
	I0315 07:06:24.734007   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.734017   46705 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:06:24.734025   46705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:06:24.734100   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:06:24.781689   46705 cri.go:89] found id: ""
	I0315 07:06:24.781723   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.781733   46705 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:06:24.781740   46705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:06:24.781805   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:06:24.821476   46705 cri.go:89] found id: ""
	I0315 07:06:24.821509   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.821519   46705 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:06:24.821528   46705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:06:24.821607   46705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:06:24.863032   46705 cri.go:89] found id: ""
	I0315 07:06:24.863084   46705 logs.go:276] 0 containers: []
	W0315 07:06:24.863096   46705 logs.go:278] No container was found matching "kindnet"
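After the failed init, minikube asks the CRI runtime, component by component, whether any control-plane container was ever created; every query above comes back empty. The same per-name check can be scripted by hand; a sketch assuming crictl is already configured for the CRI-O socket used in this run:

    #!/usr/bin/env bash
    # Sketch: list containers per control-plane component, matching the
    # `crictl ps -a --quiet --name=...` calls in the log above.
    set -uo pipefail

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found for $name"
      else
        echo "$name -> $ids"
      fi
    done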
	I0315 07:06:24.863108   46705 logs.go:123] Gathering logs for container status ...
	I0315 07:06:24.863131   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:06:24.911693   46705 logs.go:123] Gathering logs for kubelet ...
	I0315 07:06:24.911734   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:06:24.984666   46705 logs.go:123] Gathering logs for dmesg ...
	I0315 07:06:24.984702   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:06:25.005094   46705 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:06:25.005123   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:06:25.168073   46705 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:06:25.168100   46705 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:06:25.168127   46705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
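The "Gathering logs" steps above can be reproduced directly on the node. A sketch that collects the same sources into a single file (the output path is only an example; describe-nodes is omitted since the apiserver is down in this run):

    #!/usr/bin/env bash
    # Sketch: collect the diagnostics minikube gathers after a failed start.
    set -uo pipefail
    out=/tmp/minikube-diagnostics.txt   # hypothetical output path

    {
      echo "== kubelet ==";    sudo journalctl -u kubelet -n 400 --no-pager
      echo "== dmesg ==";      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      echo "== containers =="; sudo crictl ps -a
      echo "== CRI-O ==";      sudo journalctl -u crio -n 400 --no-pager
    } > "$out" 2>&1

    echo "wrote $out"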
	W0315 07:06:25.264869   46705 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:06:25.264924   46705 out.go:239] * 
	W0315 07:06:25.264975   46705 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:06:25.265001   46705 out.go:239] * 
	* 
	W0315 07:06:25.265836   46705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:06:25.270225   46705 out.go:177] 
	W0315 07:06:25.271773   46705 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:06:25.271833   46705 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:06:25.271868   46705 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:06:25.273533   46705 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
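For reference, a minimal troubleshooting sketch assembled from the suggestions embedded in the failure output above. The profile name and flags are copied from the failing `minikube start` invocation, and the diagnostic commands are the ones kubeadm and minikube themselves recommend; this is illustrative only, not part of the test run.

	# Inspect kubelet health inside the node (commands quoted from the failure output above)
	minikube ssh -p kubernetes-upgrade-294072 -- sudo systemctl status kubelet
	minikube ssh -p kubernetes-upgrade-294072 -- sudo journalctl -xeu kubelet
	minikube ssh -p kubernetes-upgrade-294072 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry with the cgroup-driver hint printed later in this log
	minikube start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd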
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-294072
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-294072: (2.321059041s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-294072 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-294072 status --format={{.Host}}: exit status 7 (76.01019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.181760066s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-294072 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.4873ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-294072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-294072
	    minikube start -p kubernetes-upgrade-294072 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2940722 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-294072 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.182858114s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-15 07:08:26.239366108 +0000 UTC m=+4351.243074770
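For orientation, the sequence this test drives, condensed from the steps logged above (a sketch; flags and the expected failures are copied from the logged invocations):

	minikube start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio      # failed: K8S_KUBELET_NOT_RUNNING
	minikube stop -p kubernetes-upgrade-294072
	minikube start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
	minikube start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio      # expected failure: K8S_DOWNGRADE_UNSUPPORTED
	minikube start -p kubernetes-upgrade-294072 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio # restart after rejected downgrade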
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-294072 -n kubernetes-upgrade-294072
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-294072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-294072 logs -n 25: (2.312761075s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-082115                    | pause-082115              | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | --alsologtostderr -v=5             |                           |         |         |                     |                     |
	| unpause | -p pause-082115                    | pause-082115              | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | --alsologtostderr -v=5             |                           |         |         |                     |                     |
	| pause   | -p pause-082115                    | pause-082115              | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | --alsologtostderr -v=5             |                           |         |         |                     |                     |
	| delete  | -p pause-082115                    | pause-082115              | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | --alsologtostderr -v=5             |                           |         |         |                     |                     |
	| delete  | -p pause-082115                    | pause-082115              | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	| start   | -p running-upgrade-522675          | minikube                  | jenkins | v1.26.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-254279 sudo        | NoKubernetes-254279       | jenkins | v1.32.0 | 15 Mar 24 07:05 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-254279             | NoKubernetes-254279       | jenkins | v1.32.0 | 15 Mar 24 07:05 UTC | 15 Mar 24 07:05 UTC |
	| start   | -p force-systemd-flag-613029       | force-systemd-flag-613029 | jenkins | v1.32.0 | 15 Mar 24 07:05 UTC | 15 Mar 24 07:06 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-691560 stop        | minikube                  | jenkins | v1.26.0 | 15 Mar 24 07:05 UTC | 15 Mar 24 07:05 UTC |
	| start   | -p stopped-upgrade-691560          | stopped-upgrade-691560    | jenkins | v1.32.0 | 15 Mar 24 07:05 UTC | 15 Mar 24 07:06 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-522675          | running-upgrade-522675    | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:07 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-294072       | kubernetes-upgrade-294072 | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:06 UTC |
	| start   | -p kubernetes-upgrade-294072       | kubernetes-upgrade-294072 | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:07 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-613029 ssh cat  | force-systemd-flag-613029 | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:06 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-613029       | force-systemd-flag-613029 | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:06 UTC |
	| start   | -p force-systemd-env-397316        | force-systemd-env-397316  | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:07 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-691560          | stopped-upgrade-691560    | jenkins | v1.32.0 | 15 Mar 24 07:06 UTC | 15 Mar 24 07:07 UTC |
	| start   | -p cert-expiration-266938          | cert-expiration-266938    | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC | 15 Mar 24 07:08 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-294072       | kubernetes-upgrade-294072 | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-294072       | kubernetes-upgrade-294072 | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC | 15 Mar 24 07:08 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-397316        | force-systemd-env-397316  | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC | 15 Mar 24 07:07 UTC |
	| start   | -p cert-options-559541             | cert-options-559541       | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-522675          | running-upgrade-522675    | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC | 15 Mar 24 07:07 UTC |
	| start   | -p old-k8s-version-981420          | old-k8s-version-981420    | jenkins | v1.32.0 | 15 Mar 24 07:07 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true      |                           |         |         |                     |                     |
	|         | --kvm-network=default              |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system      |                           |         |         |                     |                     |
	|         | --disable-driver-mounts            |                           |         |         |                     |                     |
	|         | --keep-context=false               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:07:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:07:45.799034   53799 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:07:45.799171   53799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:07:45.799180   53799 out.go:304] Setting ErrFile to fd 2...
	I0315 07:07:45.799184   53799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:07:45.799463   53799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:07:45.800115   53799 out.go:298] Setting JSON to false
	I0315 07:07:45.801245   53799 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6562,"bootTime":1710479904,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:07:45.801311   53799 start.go:139] virtualization: kvm guest
	I0315 07:07:45.803849   53799 out.go:177] * [old-k8s-version-981420] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:07:45.805739   53799 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:07:45.807194   53799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:07:45.805795   53799 notify.go:220] Checking for updates...
	I0315 07:07:45.808705   53799 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:07:45.810076   53799 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:07:45.811356   53799 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:07:45.812724   53799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:07:45.814694   53799 config.go:182] Loaded profile config "cert-expiration-266938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:07:45.814812   53799 config.go:182] Loaded profile config "cert-options-559541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:07:45.814915   53799 config.go:182] Loaded profile config "kubernetes-upgrade-294072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:07:45.815044   53799 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:07:45.859425   53799 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:07:45.861051   53799 start.go:297] selected driver: kvm2
	I0315 07:07:45.861093   53799 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:07:45.861109   53799 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:07:45.861929   53799 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:07:45.862004   53799 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:07:45.880914   53799 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:07:45.880976   53799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:07:45.881280   53799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:07:45.881366   53799 cni.go:84] Creating CNI manager for ""
	I0315 07:07:45.881381   53799 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:07:45.881394   53799 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:07:45.881494   53799 start.go:340] cluster config:
	{Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:07:45.881630   53799 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:07:45.883596   53799 out.go:177] * Starting "old-k8s-version-981420" primary control-plane node in "old-k8s-version-981420" cluster
	I0315 07:07:41.679849   53621 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:07:41.679885   53621 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 07:07:41.679891   53621 cache.go:56] Caching tarball of preloaded images
	I0315 07:07:41.679974   53621 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:07:41.679989   53621 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 07:07:41.680075   53621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-options-559541/config.json ...
	I0315 07:07:41.680087   53621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-options-559541/config.json: {Name:mk242293bd216ee27c34e3d2602d67c5f5b0fdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:41.680207   53621 start.go:360] acquireMachinesLock for cert-options-559541: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:07:43.756280   53245 machine.go:94] provisionDockerMachine start ...
	I0315 07:07:43.756309   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:43.756575   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:43.759416   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:43.760231   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:43.760353   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:43.760380   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:43.761892   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:43.762072   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:43.762250   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:43.762432   53245 main.go:141] libmachine: Using SSH client type: native
	I0315 07:07:43.762619   53245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:07:43.762633   53245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:07:43.882819   53245 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-294072
	
	I0315 07:07:43.882855   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:07:43.883132   53245 buildroot.go:166] provisioning hostname "kubernetes-upgrade-294072"
	I0315 07:07:43.883161   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:07:43.883385   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:43.886623   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:43.887085   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:43.887119   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:43.887298   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:43.887479   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:43.887649   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:43.887814   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:43.887973   53245 main.go:141] libmachine: Using SSH client type: native
	I0315 07:07:43.888167   53245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:07:43.888184   53245 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-294072 && echo "kubernetes-upgrade-294072" | sudo tee /etc/hostname
	I0315 07:07:44.018450   53245 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-294072
	
	I0315 07:07:44.018478   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:44.021462   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.021840   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:44.021869   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.022078   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:44.022257   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:44.022408   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:44.022528   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:44.022719   53245 main.go:141] libmachine: Using SSH client type: native
	I0315 07:07:44.022899   53245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:07:44.022917   53245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-294072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-294072/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-294072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:07:44.138070   53245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:07:44.138102   53245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:07:44.138136   53245 buildroot.go:174] setting up certificates
	I0315 07:07:44.138146   53245 provision.go:84] configureAuth start
	I0315 07:07:44.138155   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetMachineName
	I0315 07:07:44.138432   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:07:44.141640   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.142040   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:44.142076   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.142202   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:44.145008   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.145392   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:44.145427   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.145532   53245 provision.go:143] copyHostCerts
	I0315 07:07:44.145601   53245 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:07:44.145612   53245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:07:44.145677   53245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:07:44.145823   53245 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:07:44.145835   53245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:07:44.145860   53245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:07:44.145925   53245 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:07:44.145932   53245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:07:44.145949   53245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:07:44.146006   53245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-294072 san=[127.0.0.1 192.168.39.216 kubernetes-upgrade-294072 localhost minikube]
	I0315 07:07:44.531325   53245 provision.go:177] copyRemoteCerts
	I0315 07:07:44.531396   53245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:07:44.531429   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:44.534437   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.534898   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:44.534931   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.535061   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:44.535282   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:44.535460   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:44.535610   53245 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:07:44.626191   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:07:44.669168   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0315 07:07:44.710092   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:07:44.751754   53245 provision.go:87] duration metric: took 613.591234ms to configureAuth
	I0315 07:07:44.751785   53245 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:07:44.751961   53245 config.go:182] Loaded profile config "kubernetes-upgrade-294072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:07:44.752044   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:44.755708   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.849722   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:44.849753   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:44.850144   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:44.850404   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:44.850577   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:44.850751   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:44.850966   53245 main.go:141] libmachine: Using SSH client type: native
	I0315 07:07:44.851148   53245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:07:44.851168   53245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:07:45.417916   53097 main.go:141] libmachine: (cert-expiration-266938) Calling .GetIP
	I0315 07:07:45.421183   53097 main.go:141] libmachine: (cert-expiration-266938) DBG | domain cert-expiration-266938 has defined MAC address 52:54:00:ab:34:21 in network mk-cert-expiration-266938
	I0315 07:07:45.421763   53097 main.go:141] libmachine: (cert-expiration-266938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:34:21", ip: ""} in network mk-cert-expiration-266938: {Iface:virbr4 ExpiryTime:2024-03-15 08:07:33 +0000 UTC Type:0 Mac:52:54:00:ab:34:21 Iaid: IPaddr:192.168.72.56 Prefix:24 Hostname:cert-expiration-266938 Clientid:01:52:54:00:ab:34:21}
	I0315 07:07:45.421784   53097 main.go:141] libmachine: (cert-expiration-266938) DBG | domain cert-expiration-266938 has defined IP address 192.168.72.56 and MAC address 52:54:00:ab:34:21 in network mk-cert-expiration-266938
	I0315 07:07:45.422026   53097 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:07:45.426311   53097 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:07:45.439896   53097 kubeadm.go:877] updating cluster {Name:cert-expiration-266938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-266938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:07:45.440018   53097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:07:45.440069   53097 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:07:45.472046   53097 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:07:45.472105   53097 ssh_runner.go:195] Run: which lz4
	I0315 07:07:45.476542   53097 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:07:45.481045   53097 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:07:45.481073   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:07:47.240866   53097 crio.go:444] duration metric: took 1.764366543s to copy over tarball
	I0315 07:07:47.240925   53097 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:07:49.846244   53097 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.605287905s)
	I0315 07:07:49.846267   53097 crio.go:451] duration metric: took 2.605381113s to extract the tarball
	I0315 07:07:49.846274   53097 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:07:49.890138   53097 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:07:49.942147   53097 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:07:49.942158   53097 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:07:49.942164   53097 kubeadm.go:928] updating node { 192.168.72.56 8443 v1.28.4 crio true true} ...
	I0315 07:07:49.942278   53097 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-266938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-266938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:07:49.942338   53097 ssh_runner.go:195] Run: crio config
	I0315 07:07:49.997884   53097 cni.go:84] Creating CNI manager for ""
	I0315 07:07:49.997900   53097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:07:49.997913   53097 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:07:49.997935   53097 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-266938 NodeName:cert-expiration-266938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:07:49.998129   53097 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-266938"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:07:49.998194   53097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:07:50.009343   53097 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:07:50.009407   53097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:07:50.020521   53097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0315 07:07:50.044109   53097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:07:50.064110   53097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
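A config of the shape dumped above can be exercised before it is used; a hedged sketch using kubeadm's dry-run mode, which validates the file and renders the manifests into a temporary directory instead of /etc/kubernetes (the path is the kubeadm.yaml that minikube copies the .new file to later in this log):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run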
	I0315 07:07:50.086137   53097 ssh_runner.go:195] Run: grep 192.168.72.56	control-plane.minikube.internal$ /etc/hosts
	I0315 07:07:50.092212   53097 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:07:50.106489   53097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:07:50.246311   53097 ssh_runner.go:195] Run: sudo systemctl start kubelet
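Whether kubelet actually came up from the drop-in written above can be confirmed with stock systemd tooling; a small sketch, nothing minikube-specific assumed:

    systemctl is-active kubelet
    systemctl cat kubelet          # should list 10-kubeadm.conf among the drop-ins
    journalctl -u kubelet -n 20 --no-pager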
	I0315 07:07:45.884873   53799 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:07:45.884916   53799 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 07:07:45.884927   53799 cache.go:56] Caching tarball of preloaded images
	I0315 07:07:45.885016   53799 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:07:45.885040   53799 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 07:07:45.885169   53799 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:07:45.885192   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json: {Name:mk282299da90236b026435d5900111e6e36224d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:45.885379   53799 start.go:360] acquireMachinesLock for old-k8s-version-981420: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:07:52.054327   53245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:07:52.054370   53245 machine.go:97] duration metric: took 8.29807045s to provisionDockerMachine
	I0315 07:07:52.054384   53245 start.go:293] postStartSetup for "kubernetes-upgrade-294072" (driver="kvm2")
	I0315 07:07:52.054400   53245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:07:52.054452   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:52.054880   53245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:07:52.054916   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:52.057797   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.058240   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:52.058271   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.058397   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:52.058594   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:52.058763   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:52.058899   53245 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:07:52.309737   53621 start.go:364] duration metric: took 10.629499381s to acquireMachinesLock for "cert-options-559541"
	I0315 07:07:52.309791   53621 start.go:93] Provisioning new machine with config: &{Name:cert-options-559541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:cert-options-559541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:07:52.309918   53621 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:07:52.152247   53245 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:07:52.156683   53245 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:07:52.156705   53245 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:07:52.156769   53245 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:07:52.156845   53245 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:07:52.156938   53245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:07:52.167720   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:07:52.198913   53245 start.go:296] duration metric: took 144.508806ms for postStartSetup
	I0315 07:07:52.198967   53245 fix.go:56] duration metric: took 8.46914064s for fixHost
	I0315 07:07:52.199000   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:52.201881   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.202181   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:52.202212   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.202335   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:52.202541   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:52.202689   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:52.202806   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:52.202941   53245 main.go:141] libmachine: Using SSH client type: native
	I0315 07:07:52.203103   53245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0315 07:07:52.203112   53245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:07:52.309593   53245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710486472.300406750
	
	I0315 07:07:52.309616   53245 fix.go:216] guest clock: 1710486472.300406750
	I0315 07:07:52.309624   53245 fix.go:229] Guest: 2024-03-15 07:07:52.30040675 +0000 UTC Remote: 2024-03-15 07:07:52.198979014 +0000 UTC m=+40.139592212 (delta=101.427736ms)
	I0315 07:07:52.309643   53245 fix.go:200] guest clock delta is within tolerance: 101.427736ms
	I0315 07:07:52.309650   53245 start.go:83] releasing machines lock for "kubernetes-upgrade-294072", held for 8.579882573s
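The guest-clock check above boils down to running date on both sides and diffing; a hedged sketch of the same idea (IP and key path taken from the ssh client lines above, bc assumed to be available on the host):

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa \
        docker@192.168.39.216 'date +%s.%N')
    echo "guest - host delta: $(echo "$guest_now - $host_now" | bc) s"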
	I0315 07:07:52.309679   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:52.309972   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:07:52.312714   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.313097   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:52.313153   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.313318   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:52.313864   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:52.314100   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .DriverName
	I0315 07:07:52.314187   53245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:07:52.314244   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:52.314498   53245 ssh_runner.go:195] Run: cat /version.json
	I0315 07:07:52.314520   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHHostname
	I0315 07:07:52.317067   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.317402   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.317504   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:52.317543   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.317685   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:52.317846   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:52.317870   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:52.317902   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:52.318041   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHPort
	I0315 07:07:52.318053   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:52.318217   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHKeyPath
	I0315 07:07:52.318218   53245 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:07:52.318364   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetSSHUsername
	I0315 07:07:52.318482   53245 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/kubernetes-upgrade-294072/id_rsa Username:docker}
	I0315 07:07:52.442460   53245 ssh_runner.go:195] Run: systemctl --version
	I0315 07:07:52.449160   53245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:07:52.648337   53245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:07:52.656726   53245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:07:52.656803   53245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:07:52.671185   53245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
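The find/mv pair above is how minikube parks conflicting CNI configs out of the way; an equivalent hand-run sketch (same directory and .mk_disabled naming convention, -print used instead of the log's -printf):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
        -print -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;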
	I0315 07:07:52.671210   53245 start.go:494] detecting cgroup driver to use...
	I0315 07:07:52.671279   53245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:07:52.698008   53245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:07:52.714791   53245 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:07:52.714857   53245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:07:52.732145   53245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:07:52.749212   53245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:07:52.901033   53245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:07:53.061438   53245 docker.go:233] disabling docker service ...
	I0315 07:07:53.061512   53245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:07:53.089917   53245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:07:53.110890   53245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:07:53.280990   53245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:07:53.448252   53245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:07:53.463704   53245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:07:53.490973   53245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:07:53.491052   53245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:07:53.504699   53245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:07:53.504778   53245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:07:53.517336   53245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:07:53.530069   53245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:07:53.546194   53245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:07:53.560901   53245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:07:53.573888   53245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:07:53.586055   53245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:07:53.789846   53245 ssh_runner.go:195] Run: sudo systemctl restart crio
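After the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, cgroup manager and conmon cgroup that were configured; a hedged sketch to confirm once the restart completes:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo crictl info >/dev/null && echo "crio is answering on its socket"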
	I0315 07:07:54.279204   53245 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:07:54.279279   53245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:07:54.285391   53245 start.go:562] Will wait 60s for crictl version
	I0315 07:07:54.285467   53245 ssh_runner.go:195] Run: which crictl
	I0315 07:07:54.289934   53245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:07:54.341831   53245 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:07:54.341925   53245 ssh_runner.go:195] Run: crio --version
	I0315 07:07:54.384414   53245 ssh_runner.go:195] Run: crio --version
	I0315 07:07:54.425056   53245 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:07:50.265315   53097 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938 for IP: 192.168.72.56
	I0315 07:07:50.265332   53097 certs.go:194] generating shared ca certs ...
	I0315 07:07:50.265352   53097 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.265535   53097 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:07:50.265591   53097 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:07:50.265598   53097 certs.go:256] generating profile certs ...
	I0315 07:07:50.265678   53097 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.key
	I0315 07:07:50.265721   53097 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.crt with IP's: []
	I0315 07:07:50.353803   53097 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.crt ...
	I0315 07:07:50.353830   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.crt: {Name:mk62185eaa249aba3818ef93a037c725f722717e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.354021   53097 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.key ...
	I0315 07:07:50.354033   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/client.key: {Name:mkf83f1db0bfeb08bd40da5979ed06a147f1c9c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.354146   53097 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key.dea9853b
	I0315 07:07:50.354163   53097 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt.dea9853b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.56]
	I0315 07:07:50.519186   53097 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt.dea9853b ...
	I0315 07:07:50.519201   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt.dea9853b: {Name:mkf81a5855de020964974f408bdc8d5a2b361e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.519370   53097 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key.dea9853b ...
	I0315 07:07:50.519378   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key.dea9853b: {Name:mk738269ac5ac7264e29309967f65cf6999192b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.519448   53097 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt.dea9853b -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt
	I0315 07:07:50.519514   53097 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key.dea9853b -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key
	I0315 07:07:50.519560   53097 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.key
	I0315 07:07:50.519570   53097 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.crt with IP's: []
	I0315 07:07:50.658466   53097 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.crt ...
	I0315 07:07:50.658480   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.crt: {Name:mk5951b963c888eb854419398187eae7a766c3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.675921   53097 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.key ...
	I0315 07:07:50.675943   53097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.key: {Name:mkc18cc5017e057685e2bfe9aece7793c8c12a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:50.676148   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:07:50.676180   53097 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:07:50.676186   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:07:50.676217   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:07:50.676235   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:07:50.676250   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:07:50.676285   53097 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:07:50.676874   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:07:50.706846   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:07:50.735773   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:07:50.764597   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:07:50.793533   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:07:50.823354   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:07:50.855595   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:07:50.887105   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/cert-expiration-266938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:07:50.919321   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:07:50.952862   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:07:50.984584   53097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:07:51.014002   53097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:07:51.034791   53097 ssh_runner.go:195] Run: openssl version
	I0315 07:07:51.041380   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:07:51.054616   53097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:07:51.060034   53097 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:07:51.060083   53097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:07:51.066518   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:07:51.079818   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:07:51.095340   53097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:07:51.102547   53097 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:07:51.102627   53097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:07:51.109489   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:07:51.124953   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:07:51.150600   53097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:51.166740   53097 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:51.166798   53097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:51.183567   53097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
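The ln targets named 51391683.0, 3ec20f2e.0 and b5213941.0 above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up through a symlink named after its subject hash. A short sketch of the same pattern for the minikubeCA cert handled here:

    # link the CA into /etc/ssl/certs under its own name...
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    # ...and under its subject-hash name so OpenSSL's directory lookup can find it
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this log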
	I0315 07:07:51.198463   53097 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:07:51.204162   53097 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:07:51.204223   53097 kubeadm.go:391] StartCluster: {Name:cert-expiration-266938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.28.4 ClusterName:cert-expiration-266938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:07:51.204307   53097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:07:51.204380   53097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:07:51.246710   53097 cri.go:89] found id: ""
	I0315 07:07:51.246758   53097 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:07:51.258732   53097 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:07:51.271201   53097 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:07:51.283520   53097 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:07:51.283529   53097 kubeadm.go:156] found existing configuration files:
	
	I0315 07:07:51.283582   53097 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:07:51.294734   53097 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:07:51.294797   53097 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:07:51.307069   53097 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:07:51.318806   53097 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:07:51.318879   53097 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:07:51.330545   53097 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:07:51.341946   53097 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:07:51.342006   53097 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:07:51.353681   53097 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:07:51.365118   53097 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:07:51.365193   53097 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:07:51.376798   53097 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:07:51.660027   53097 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
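The preflight warning above is benign in this run, since the log already issued systemctl start kubelet a moment earlier; persisting kubelet across reboots is one command:

    sudo systemctl enable kubelet.service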
	I0315 07:07:52.312206   53621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0315 07:07:52.312477   53621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:07:52.312530   53621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:07:52.329028   53621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0315 07:07:52.329424   53621 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:07:52.329977   53621 main.go:141] libmachine: Using API Version  1
	I0315 07:07:52.329990   53621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:07:52.330332   53621 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:07:52.330516   53621 main.go:141] libmachine: (cert-options-559541) Calling .GetMachineName
	I0315 07:07:52.330656   53621 main.go:141] libmachine: (cert-options-559541) Calling .DriverName
	I0315 07:07:52.330811   53621 start.go:159] libmachine.API.Create for "cert-options-559541" (driver="kvm2")
	I0315 07:07:52.330843   53621 client.go:168] LocalClient.Create starting
	I0315 07:07:52.330871   53621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:07:52.330906   53621 main.go:141] libmachine: Decoding PEM data...
	I0315 07:07:52.330923   53621 main.go:141] libmachine: Parsing certificate...
	I0315 07:07:52.330971   53621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:07:52.330984   53621 main.go:141] libmachine: Decoding PEM data...
	I0315 07:07:52.330993   53621 main.go:141] libmachine: Parsing certificate...
	I0315 07:07:52.331005   53621 main.go:141] libmachine: Running pre-create checks...
	I0315 07:07:52.331009   53621 main.go:141] libmachine: (cert-options-559541) Calling .PreCreateCheck
	I0315 07:07:52.331332   53621 main.go:141] libmachine: (cert-options-559541) Calling .GetConfigRaw
	I0315 07:07:52.331755   53621 main.go:141] libmachine: Creating machine...
	I0315 07:07:52.331763   53621 main.go:141] libmachine: (cert-options-559541) Calling .Create
	I0315 07:07:52.331917   53621 main.go:141] libmachine: (cert-options-559541) Creating KVM machine...
	I0315 07:07:52.333149   53621 main.go:141] libmachine: (cert-options-559541) DBG | found existing default KVM network
	I0315 07:07:52.334052   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.333902   53833 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f4:5b:03} reservation:<nil>}
	I0315 07:07:52.334844   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.334752   53833 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027e4a0}
	I0315 07:07:52.334857   53621 main.go:141] libmachine: (cert-options-559541) DBG | created network xml: 
	I0315 07:07:52.334867   53621 main.go:141] libmachine: (cert-options-559541) DBG | <network>
	I0315 07:07:52.334875   53621 main.go:141] libmachine: (cert-options-559541) DBG |   <name>mk-cert-options-559541</name>
	I0315 07:07:52.334884   53621 main.go:141] libmachine: (cert-options-559541) DBG |   <dns enable='no'/>
	I0315 07:07:52.334894   53621 main.go:141] libmachine: (cert-options-559541) DBG |   
	I0315 07:07:52.334904   53621 main.go:141] libmachine: (cert-options-559541) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0315 07:07:52.334911   53621 main.go:141] libmachine: (cert-options-559541) DBG |     <dhcp>
	I0315 07:07:52.334920   53621 main.go:141] libmachine: (cert-options-559541) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0315 07:07:52.334930   53621 main.go:141] libmachine: (cert-options-559541) DBG |     </dhcp>
	I0315 07:07:52.334938   53621 main.go:141] libmachine: (cert-options-559541) DBG |   </ip>
	I0315 07:07:52.334943   53621 main.go:141] libmachine: (cert-options-559541) DBG |   
	I0315 07:07:52.334951   53621 main.go:141] libmachine: (cert-options-559541) DBG | </network>
	I0315 07:07:52.334956   53621 main.go:141] libmachine: (cert-options-559541) DBG | 
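The kvm2 driver defines this network through the libvirt API; a hedged sketch of the equivalent with stock virsh tooling, assuming the XML above has been saved to a local file of that name:

    virsh net-define mk-cert-options-559541.xml   # file name assumed; contents as printed above
    virsh net-start mk-cert-options-559541
    virsh net-dumpxml mk-cert-options-559541      # confirm bridge, DHCP range and dns enable='no'
    virsh net-list --all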
	I0315 07:07:52.340300   53621 main.go:141] libmachine: (cert-options-559541) DBG | trying to create private KVM network mk-cert-options-559541 192.168.50.0/24...
	I0315 07:07:52.415443   53621 main.go:141] libmachine: (cert-options-559541) DBG | private KVM network mk-cert-options-559541 192.168.50.0/24 created
	I0315 07:07:52.415466   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.415398   53833 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:07:52.415485   53621 main.go:141] libmachine: (cert-options-559541) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541 ...
	I0315 07:07:52.415496   53621 main.go:141] libmachine: (cert-options-559541) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:07:52.415582   53621 main.go:141] libmachine: (cert-options-559541) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:07:52.663776   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.663638   53833 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541/id_rsa...
	I0315 07:07:52.747492   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.747340   53833 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541/cert-options-559541.rawdisk...
	I0315 07:07:52.747514   53621 main.go:141] libmachine: (cert-options-559541) DBG | Writing magic tar header
	I0315 07:07:52.747530   53621 main.go:141] libmachine: (cert-options-559541) DBG | Writing SSH key tar header
	I0315 07:07:52.747613   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:52.747536   53833 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541 ...
	I0315 07:07:52.747674   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541
	I0315 07:07:52.747689   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:07:52.747705   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:07:52.747717   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541 (perms=drwx------)
	I0315 07:07:52.747778   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:07:52.747809   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:07:52.747821   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:07:52.747836   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:07:52.747848   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:07:52.747858   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:07:52.747865   53621 main.go:141] libmachine: (cert-options-559541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:07:52.747878   53621 main.go:141] libmachine: (cert-options-559541) Creating domain...
	I0315 07:07:52.747910   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:07:52.747930   53621 main.go:141] libmachine: (cert-options-559541) DBG | Checking permissions on dir: /home
	I0315 07:07:52.747941   53621 main.go:141] libmachine: (cert-options-559541) DBG | Skipping /home - not owner
	I0315 07:07:52.749085   53621 main.go:141] libmachine: (cert-options-559541) define libvirt domain using xml: 
	I0315 07:07:52.749095   53621 main.go:141] libmachine: (cert-options-559541) <domain type='kvm'>
	I0315 07:07:52.749103   53621 main.go:141] libmachine: (cert-options-559541)   <name>cert-options-559541</name>
	I0315 07:07:52.749109   53621 main.go:141] libmachine: (cert-options-559541)   <memory unit='MiB'>2048</memory>
	I0315 07:07:52.749118   53621 main.go:141] libmachine: (cert-options-559541)   <vcpu>2</vcpu>
	I0315 07:07:52.749124   53621 main.go:141] libmachine: (cert-options-559541)   <features>
	I0315 07:07:52.749130   53621 main.go:141] libmachine: (cert-options-559541)     <acpi/>
	I0315 07:07:52.749151   53621 main.go:141] libmachine: (cert-options-559541)     <apic/>
	I0315 07:07:52.749163   53621 main.go:141] libmachine: (cert-options-559541)     <pae/>
	I0315 07:07:52.749168   53621 main.go:141] libmachine: (cert-options-559541)     
	I0315 07:07:52.749175   53621 main.go:141] libmachine: (cert-options-559541)   </features>
	I0315 07:07:52.749181   53621 main.go:141] libmachine: (cert-options-559541)   <cpu mode='host-passthrough'>
	I0315 07:07:52.749188   53621 main.go:141] libmachine: (cert-options-559541)   
	I0315 07:07:52.749193   53621 main.go:141] libmachine: (cert-options-559541)   </cpu>
	I0315 07:07:52.749199   53621 main.go:141] libmachine: (cert-options-559541)   <os>
	I0315 07:07:52.749203   53621 main.go:141] libmachine: (cert-options-559541)     <type>hvm</type>
	I0315 07:07:52.749210   53621 main.go:141] libmachine: (cert-options-559541)     <boot dev='cdrom'/>
	I0315 07:07:52.749214   53621 main.go:141] libmachine: (cert-options-559541)     <boot dev='hd'/>
	I0315 07:07:52.749218   53621 main.go:141] libmachine: (cert-options-559541)     <bootmenu enable='no'/>
	I0315 07:07:52.749221   53621 main.go:141] libmachine: (cert-options-559541)   </os>
	I0315 07:07:52.749227   53621 main.go:141] libmachine: (cert-options-559541)   <devices>
	I0315 07:07:52.749232   53621 main.go:141] libmachine: (cert-options-559541)     <disk type='file' device='cdrom'>
	I0315 07:07:52.749243   53621 main.go:141] libmachine: (cert-options-559541)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541/boot2docker.iso'/>
	I0315 07:07:52.749250   53621 main.go:141] libmachine: (cert-options-559541)       <target dev='hdc' bus='scsi'/>
	I0315 07:07:52.749255   53621 main.go:141] libmachine: (cert-options-559541)       <readonly/>
	I0315 07:07:52.749259   53621 main.go:141] libmachine: (cert-options-559541)     </disk>
	I0315 07:07:52.749266   53621 main.go:141] libmachine: (cert-options-559541)     <disk type='file' device='disk'>
	I0315 07:07:52.749273   53621 main.go:141] libmachine: (cert-options-559541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:07:52.749284   53621 main.go:141] libmachine: (cert-options-559541)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/cert-options-559541/cert-options-559541.rawdisk'/>
	I0315 07:07:52.749294   53621 main.go:141] libmachine: (cert-options-559541)       <target dev='hda' bus='virtio'/>
	I0315 07:07:52.749300   53621 main.go:141] libmachine: (cert-options-559541)     </disk>
	I0315 07:07:52.749305   53621 main.go:141] libmachine: (cert-options-559541)     <interface type='network'>
	I0315 07:07:52.749314   53621 main.go:141] libmachine: (cert-options-559541)       <source network='mk-cert-options-559541'/>
	I0315 07:07:52.749319   53621 main.go:141] libmachine: (cert-options-559541)       <model type='virtio'/>
	I0315 07:07:52.749326   53621 main.go:141] libmachine: (cert-options-559541)     </interface>
	I0315 07:07:52.749332   53621 main.go:141] libmachine: (cert-options-559541)     <interface type='network'>
	I0315 07:07:52.749339   53621 main.go:141] libmachine: (cert-options-559541)       <source network='default'/>
	I0315 07:07:52.749344   53621 main.go:141] libmachine: (cert-options-559541)       <model type='virtio'/>
	I0315 07:07:52.749350   53621 main.go:141] libmachine: (cert-options-559541)     </interface>
	I0315 07:07:52.749355   53621 main.go:141] libmachine: (cert-options-559541)     <serial type='pty'>
	I0315 07:07:52.749363   53621 main.go:141] libmachine: (cert-options-559541)       <target port='0'/>
	I0315 07:07:52.749372   53621 main.go:141] libmachine: (cert-options-559541)     </serial>
	I0315 07:07:52.749379   53621 main.go:141] libmachine: (cert-options-559541)     <console type='pty'>
	I0315 07:07:52.749398   53621 main.go:141] libmachine: (cert-options-559541)       <target type='serial' port='0'/>
	I0315 07:07:52.749410   53621 main.go:141] libmachine: (cert-options-559541)     </console>
	I0315 07:07:52.749417   53621 main.go:141] libmachine: (cert-options-559541)     <rng model='virtio'>
	I0315 07:07:52.749424   53621 main.go:141] libmachine: (cert-options-559541)       <backend model='random'>/dev/random</backend>
	I0315 07:07:52.749430   53621 main.go:141] libmachine: (cert-options-559541)     </rng>
	I0315 07:07:52.749435   53621 main.go:141] libmachine: (cert-options-559541)     
	I0315 07:07:52.749440   53621 main.go:141] libmachine: (cert-options-559541)     
	I0315 07:07:52.749446   53621 main.go:141] libmachine: (cert-options-559541)   </devices>
	I0315 07:07:52.749452   53621 main.go:141] libmachine: (cert-options-559541) </domain>
	I0315 07:07:52.749457   53621 main.go:141] libmachine: (cert-options-559541) 
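The XML document above is what gets handed to libvirt when the log prints "Creating domain...". As a rough illustration only (minikube's kvm2 driver talks to libvirt directly rather than via the CLI), the same definition could be registered and booted with virsh from Go:

	// defineAndStart writes the domain XML to a temp file, then runs
	// "virsh define" and "virsh start". Illustrative sketch only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func defineAndStart(domainXML, name string) error {
		f, err := os.CreateTemp("", name+"-*.xml")
		if err != nil {
			return err
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(domainXML); err != nil {
			return err
		}
		f.Close()

		// "virsh define" registers the domain, "virsh start" boots it.
		for _, args := range [][]string{
			{"define", f.Name()},
			{"start", name},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			if err != nil {
				return fmt.Errorf("virsh %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		const name = "cert-options-559541"
		domainXML := os.Getenv("DOMAIN_XML") // the <domain>...</domain> document shown above
		if err := defineAndStart(domainXML, name); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}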
	I0315 07:07:52.754395   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:19:92:95 in network default
	I0315 07:07:52.755165   53621 main.go:141] libmachine: (cert-options-559541) Ensuring networks are active...
	I0315 07:07:52.755185   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:52.756024   53621 main.go:141] libmachine: (cert-options-559541) Ensuring network default is active
	I0315 07:07:52.756417   53621 main.go:141] libmachine: (cert-options-559541) Ensuring network mk-cert-options-559541 is active
	I0315 07:07:52.757033   53621 main.go:141] libmachine: (cert-options-559541) Getting domain xml...
	I0315 07:07:52.757845   53621 main.go:141] libmachine: (cert-options-559541) Creating domain...
	I0315 07:07:54.065988   53621 main.go:141] libmachine: (cert-options-559541) Waiting to get IP...
	I0315 07:07:54.066811   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:54.067324   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:54.067365   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:54.067293   53833 retry.go:31] will retry after 283.329867ms: waiting for machine to come up
	I0315 07:07:54.351949   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:54.352526   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:54.352596   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:54.352530   53833 retry.go:31] will retry after 326.18129ms: waiting for machine to come up
	I0315 07:07:54.680112   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:54.680563   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:54.680583   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:54.680535   53833 retry.go:31] will retry after 351.761135ms: waiting for machine to come up
	I0315 07:07:55.034143   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:55.034596   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:55.034619   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:55.034566   53833 retry.go:31] will retry after 453.526218ms: waiting for machine to come up
	I0315 07:07:55.490359   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:55.490839   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:55.490937   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:55.490873   53833 retry.go:31] will retry after 581.977122ms: waiting for machine to come up
	I0315 07:07:56.075036   53621 main.go:141] libmachine: (cert-options-559541) DBG | domain cert-options-559541 has defined MAC address 52:54:00:b9:c9:2e in network mk-cert-options-559541
	I0315 07:07:56.075576   53621 main.go:141] libmachine: (cert-options-559541) DBG | unable to find current IP address of domain cert-options-559541 in network mk-cert-options-559541
	I0315 07:07:56.075591   53621 main.go:141] libmachine: (cert-options-559541) DBG | I0315 07:07:56.075538   53833 retry.go:31] will retry after 592.214659ms: waiting for machine to come up
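The repeated "will retry after …ms: waiting for machine to come up" lines come from a polling loop with growing, jittered delays. A generic sketch of such a loop (not minikube's retry package; waitForIP and its timings are illustrative):

	// waitForIP polls lookup() with a growing, jittered delay until it returns a
	// non-empty address or the deadline passes. The delays in the log above
	// (283ms, 326ms, 351ms, ...) come from a similar pattern.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// Add up to 50% jitter and grow the base delay a little each round.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 4
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// A stand-in lookup that never finds an address, to exercise the loop.
		ip, err := waitForIP(func() (string, error) { return "", nil }, 3*time.Second)
		fmt.Println(ip, err)
	}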
	I0315 07:07:54.426483   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) Calling .GetIP
	I0315 07:07:54.429476   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:54.430001   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:3c:02", ip: ""} in network mk-kubernetes-upgrade-294072: {Iface:virbr1 ExpiryTime:2024-03-15 08:06:45 +0000 UTC Type:0 Mac:52:54:00:c6:3c:02 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:kubernetes-upgrade-294072 Clientid:01:52:54:00:c6:3c:02}
	I0315 07:07:54.430039   53245 main.go:141] libmachine: (kubernetes-upgrade-294072) DBG | domain kubernetes-upgrade-294072 has defined IP address 192.168.39.216 and MAC address 52:54:00:c6:3c:02 in network mk-kubernetes-upgrade-294072
	I0315 07:07:54.430253   53245 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:07:54.435158   53245 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:07:54.435256   53245 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:07:54.435297   53245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:07:54.487296   53245 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:07:54.487318   53245 crio.go:415] Images already preloaded, skipping extraction
	I0315 07:07:54.487365   53245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:07:54.530049   53245 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:07:54.530070   53245 cache_images.go:84] Images are preloaded, skipping loading
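The "all images are preloaded" / "Images are preloaded, skipping loading" decisions are based on the crictl listing above. A rough sketch of reading that listing from Go; the JSON field names (images, repoTags) follow the CRI ListImagesResponse shape and are an assumption here:

	// listImages runs "crictl images --output json" and returns the repo tags it
	// reports. Adjust the struct if your crictl version emits a different shape.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func listImages() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return nil, err
		}
		var tags []string
		for _, img := range list.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}

	func main() {
		tags, err := listImages()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d image tags\n", len(tags))
	}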
	I0315 07:07:54.530078   53245 kubeadm.go:928] updating node { 192.168.39.216 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:07:54.530195   53245 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-294072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:07:54.530282   53245 ssh_runner.go:195] Run: crio config
	I0315 07:07:54.606384   53245 cni.go:84] Creating CNI manager for ""
	I0315 07:07:54.606406   53245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:07:54.606425   53245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:07:54.606445   53245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-294072 NodeName:kubernetes-upgrade-294072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:07:54.606652   53245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-294072"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:07:54.606735   53245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:07:54.618414   53245 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:07:54.618477   53245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:07:54.630588   53245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0315 07:07:54.649740   53245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:07:54.674176   53245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
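Once kubeadm.yaml.new lands on the node, a config of this shape can be sanity-checked with a kubeadm dry run. This invocation is not shown in the log and is only a hypothetical validation step:

	// validateKubeadmConfig does a dry run of "kubeadm init" against a generated
	// config file, without touching the node state.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func validateKubeadmConfig(path string) error {
		out, err := exec.Command("kubeadm", "init", "--config", path, "--dry-run").CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm dry run failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := validateKubeadmConfig("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("kubeadm config looks valid")
		}
	}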
	I0315 07:07:54.696176   53245 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I0315 07:07:54.701416   53245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:07:54.859572   53245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:07:54.928113   53245 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072 for IP: 192.168.39.216
	I0315 07:07:54.928138   53245 certs.go:194] generating shared ca certs ...
	I0315 07:07:54.928155   53245 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:07:54.928337   53245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:07:54.928393   53245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:07:54.928408   53245 certs.go:256] generating profile certs ...
	I0315 07:07:54.928559   53245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/client.key
	I0315 07:07:54.928632   53245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key.ed5b6e78
	I0315 07:07:54.928688   53245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key
	I0315 07:07:54.928834   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:07:54.928884   53245 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:07:54.928898   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:07:54.928931   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:07:54.928958   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:07:54.929005   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:07:54.929073   53245 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:07:54.929806   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:07:54.975538   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:07:55.085070   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:07:55.139509   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:07:55.244908   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 07:07:55.338293   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:07:55.379073   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:07:55.442923   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/kubernetes-upgrade-294072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:07:55.538018   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:07:55.579945   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:07:55.623875   53245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:07:55.664730   53245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:07:55.691612   53245 ssh_runner.go:195] Run: openssl version
	I0315 07:07:55.698567   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:07:55.718732   53245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:07:55.728192   53245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:07:55.728257   53245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:07:55.736000   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:07:55.760427   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:07:55.774078   53245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:55.781076   53245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:55.781147   53245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:07:55.792626   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:07:55.813366   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:07:55.832457   53245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:07:55.838626   53245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:07:55.838691   53245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:07:55.846912   53245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
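The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL uses to look up trust anchors (for example b5213941.0 for minikubeCA.pem). A sketch of those two steps from Go, reusing the same openssl flags seen in the log; installCA is an illustrative name, and the program needs root to write under /etc/ssl/certs:

	// installCA computes the OpenSSL subject hash of a PEM certificate and links
	// it into /etc/ssl/certs/<hash>.0, mirroring the shell commands in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link, like the "ln -fs" in the log.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}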
	I0315 07:07:55.872384   53245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:07:55.878808   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:07:55.885575   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:07:55.912179   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:07:55.929625   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:07:55.948313   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:07:55.962623   53245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
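Each "-checkend 86400" run asks whether a certificate expires within the next 24 hours. The equivalent check can be done directly with crypto/x509; a minimal sketch:

	// expiresWithin reports whether the first certificate in a PEM file expires
	// within d, which is what "openssl x509 -checkend 86400" tests for 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}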
	I0315 07:07:55.969591   53245 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-294072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-294072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:07:55.969692   53245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:07:55.969757   53245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:07:56.048914   53245 cri.go:89] found id: "bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3"
	I0315 07:07:56.048946   53245 cri.go:89] found id: "747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb"
	I0315 07:07:56.048952   53245 cri.go:89] found id: "01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93"
	I0315 07:07:56.048960   53245 cri.go:89] found id: "a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f"
	I0315 07:07:56.048964   53245 cri.go:89] found id: "4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352"
	I0315 07:07:56.048969   53245 cri.go:89] found id: "30b57c72c7f262263af2666e7f587f7e8ccc068582e53862a686b0cc88c4f81d"
	I0315 07:07:56.048973   53245 cri.go:89] found id: "38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb"
	I0315 07:07:56.048976   53245 cri.go:89] found id: "4d5cc3e9f89c6e0b02582761dbdda06c8c3c028aa094daf053ea832f160233e2"
	I0315 07:07:56.048981   53245 cri.go:89] found id: "3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676"
	I0315 07:07:56.048988   53245 cri.go:89] found id: "79b72b7e55ede81bb6c09b034b8533a84f65d7d9427170d57f04fb471f4bcd28"
	I0315 07:07:56.048996   53245 cri.go:89] found id: "217484aa44ebd599c8cb3df9b397b30bdc5c12ece992f170235f356c90cca752"
	I0315 07:07:56.049001   53245 cri.go:89] found id: ""
	I0315 07:07:56.049047   53245 ssh_runner.go:195] Run: sudo runc list -f json
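The "found id:" entries are the result of filtering CRI containers by the kube-system namespace label and collecting their IDs. A sketch of the same query, using the exact crictl flags from the log:

	// kubeSystemContainers returns the IDs of all containers (running or exited)
	// whose pod namespace label is kube-system.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}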
	
	
	==> CRI-O <==
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.360917158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710486507360889883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5daf2119-7a26-4841-8574-46087c333575 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.365040339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=050b19d2-6721-420d-8220-53152b11ff8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.365159653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=050b19d2-6721-420d-8220-53152b11ff8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.365483333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c3e9513342e5f56af6a005a8eee62f5f38b688356bca490222d2640c5ed93e2,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710486501030240209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca29ab68bdad1a1f0294bca920b8a3d1cbbf6bc6ed5f422b835df901ae79ef71,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710486497378766009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0e2a82700527a75e25d14a1fbfd898d809d43bfddbb63f0d13fe1e541945a7,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710486497393652468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb48275fd72bcf37d997bf5650a90a9638184101824e26b329c99a955b3423ee,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710486497387038714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f622f524a28b8c3240c5ee5005ae5a3813bd195e5bc7e3fbed4aae472110f4,PodSandboxId:47a32c6df465f92061192a10b9a25d875f5826123067ac612d85f8430b658a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710486485517431844,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710486484648226793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae396d2326a5fe523023603345b90c1ecf2aef92a297003ccb8dc6bc70e3387,PodSandboxId:12264f837b1cd0b369d6c2cfce386b48a8393f8f9c26dc36d833781911a56d4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482916707127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3594b36f28a7c086ebd2223be190f1432cfd72188e33bfb3744118ff0cbb0720,PodSandboxId:2d8f4d5366e6f6fe544dde625d813a1a2ea0dda0d9d4994ee4e90b6a08af558e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482823463983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d
6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881abf62da23e1c80ff79aa43cf362c73bd2d2e58620a08912cb4ea904a119c,PodSandboxId:ce0de8c0e430111a3cd99cb1ad22b4cb9abd9b1a07e07fa3da6eeea4ee412651,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710486480836049956,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710486475253669379,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710486475218248946,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710486475201107204,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f,PodSandboxId:b5d3fa333195ee9d4fdf8e944945476b40820846d16952abd1ce6cc29c2f650c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442924292828,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352,PodSandboxId:d4b3990c12a5a666140ccffff7200edb87610980b35b201c640efd34f957e704,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442889299468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb,PodSandboxId:12a320e8520028c139c62e52916776726e014d07ab483540eae5cb
b1e96939f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710486442286225469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676,PodSandboxId:b3f71b9deefbc3ba6825c7caec43dcf3f114db6f3b3173db0031208c8a86049b,Metadata:&ContainerMeta
data{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710486423279790791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=050b19d2-6721-420d-8220-53152b11ff8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.418609326Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db139b61-4068-4b6f-8af5-949985ae2be0 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.418719535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db139b61-4068-4b6f-8af5-949985ae2be0 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.420112894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96adf0d2-437f-4b19-9767-4d7e15b09efb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.420688318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710486507420660170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96adf0d2-437f-4b19-9767-4d7e15b09efb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.421130022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a90d1ba-4df7-438d-8eb0-593ba6c7db19 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.421240428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a90d1ba-4df7-438d-8eb0-593ba6c7db19 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.421700191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c3e9513342e5f56af6a005a8eee62f5f38b688356bca490222d2640c5ed93e2,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710486501030240209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca29ab68bdad1a1f0294bca920b8a3d1cbbf6bc6ed5f422b835df901ae79ef71,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710486497378766009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0e2a82700527a75e25d14a1fbfd898d809d43bfddbb63f0d13fe1e541945a7,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710486497393652468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb48275fd72bcf37d997bf5650a90a9638184101824e26b329c99a955b3423ee,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710486497387038714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f622f524a28b8c3240c5ee5005ae5a3813bd195e5bc7e3fbed4aae472110f4,PodSandboxId:47a32c6df465f92061192a10b9a25d875f5826123067ac612d85f8430b658a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710486485517431844,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710486484648226793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae396d2326a5fe523023603345b90c1ecf2aef92a297003ccb8dc6bc70e3387,PodSandboxId:12264f837b1cd0b369d6c2cfce386b48a8393f8f9c26dc36d833781911a56d4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482916707127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3594b36f28a7c086ebd2223be190f1432cfd72188e33bfb3744118ff0cbb0720,PodSandboxId:2d8f4d5366e6f6fe544dde625d813a1a2ea0dda0d9d4994ee4e90b6a08af558e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482823463983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d
6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881abf62da23e1c80ff79aa43cf362c73bd2d2e58620a08912cb4ea904a119c,PodSandboxId:ce0de8c0e430111a3cd99cb1ad22b4cb9abd9b1a07e07fa3da6eeea4ee412651,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710486480836049956,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710486475253669379,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710486475218248946,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710486475201107204,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f,PodSandboxId:b5d3fa333195ee9d4fdf8e944945476b40820846d16952abd1ce6cc29c2f650c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442924292828,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352,PodSandboxId:d4b3990c12a5a666140ccffff7200edb87610980b35b201c640efd34f957e704,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442889299468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb,PodSandboxId:12a320e8520028c139c62e52916776726e014d07ab483540eae5cb
b1e96939f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710486442286225469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676,PodSandboxId:b3f71b9deefbc3ba6825c7caec43dcf3f114db6f3b3173db0031208c8a86049b,Metadata:&ContainerMeta
data{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710486423279790791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a90d1ba-4df7-438d-8eb0-593ba6c7db19 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.485283146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ffbbe3e-4526-444d-bf3a-81e8c69d3233 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.485362784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ffbbe3e-4526-444d-bf3a-81e8c69d3233 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.487451624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8d9e363-a513-4cb4-9322-019c2a96e199 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.488133900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710486507488095970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8d9e363-a513-4cb4-9322-019c2a96e199 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.488981560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a368773c-24b2-42f8-90c6-a316a87da7ca name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.489084117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a368773c-24b2-42f8-90c6-a316a87da7ca name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.489578775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c3e9513342e5f56af6a005a8eee62f5f38b688356bca490222d2640c5ed93e2,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710486501030240209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca29ab68bdad1a1f0294bca920b8a3d1cbbf6bc6ed5f422b835df901ae79ef71,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710486497378766009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0e2a82700527a75e25d14a1fbfd898d809d43bfddbb63f0d13fe1e541945a7,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710486497393652468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb48275fd72bcf37d997bf5650a90a9638184101824e26b329c99a955b3423ee,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710486497387038714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f622f524a28b8c3240c5ee5005ae5a3813bd195e5bc7e3fbed4aae472110f4,PodSandboxId:47a32c6df465f92061192a10b9a25d875f5826123067ac612d85f8430b658a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710486485517431844,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710486484648226793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae396d2326a5fe523023603345b90c1ecf2aef92a297003ccb8dc6bc70e3387,PodSandboxId:12264f837b1cd0b369d6c2cfce386b48a8393f8f9c26dc36d833781911a56d4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482916707127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3594b36f28a7c086ebd2223be190f1432cfd72188e33bfb3744118ff0cbb0720,PodSandboxId:2d8f4d5366e6f6fe544dde625d813a1a2ea0dda0d9d4994ee4e90b6a08af558e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482823463983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d
6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881abf62da23e1c80ff79aa43cf362c73bd2d2e58620a08912cb4ea904a119c,PodSandboxId:ce0de8c0e430111a3cd99cb1ad22b4cb9abd9b1a07e07fa3da6eeea4ee412651,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710486480836049956,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710486475253669379,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710486475218248946,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710486475201107204,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f,PodSandboxId:b5d3fa333195ee9d4fdf8e944945476b40820846d16952abd1ce6cc29c2f650c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442924292828,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352,PodSandboxId:d4b3990c12a5a666140ccffff7200edb87610980b35b201c640efd34f957e704,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442889299468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb,PodSandboxId:12a320e8520028c139c62e52916776726e014d07ab483540eae5cb
b1e96939f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710486442286225469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676,PodSandboxId:b3f71b9deefbc3ba6825c7caec43dcf3f114db6f3b3173db0031208c8a86049b,Metadata:&ContainerMeta
data{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710486423279790791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a368773c-24b2-42f8-90c6-a316a87da7ca name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.540812706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82b3c7ca-3217-4cc0-8456-ae618dbda51e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.540892809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82b3c7ca-3217-4cc0-8456-ae618dbda51e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.542461493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61fc4539-bf6d-409e-9f11-1b5b7d1f5ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.542944252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710486507542910658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61fc4539-bf6d-409e-9f11-1b5b7d1f5ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.543768812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed85dd6e-0a12-4358-a6d4-b28ddad55ba6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.543849501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed85dd6e-0a12-4358-a6d4-b28ddad55ba6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:08:27 kubernetes-upgrade-294072 crio[2172]: time="2024-03-15 07:08:27.545055261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c3e9513342e5f56af6a005a8eee62f5f38b688356bca490222d2640c5ed93e2,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710486501030240209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca29ab68bdad1a1f0294bca920b8a3d1cbbf6bc6ed5f422b835df901ae79ef71,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710486497378766009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0e2a82700527a75e25d14a1fbfd898d809d43bfddbb63f0d13fe1e541945a7,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710486497393652468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCo
unt: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb48275fd72bcf37d997bf5650a90a9638184101824e26b329c99a955b3423ee,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710486497387038714,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f622f524a28b8c3240c5ee5005ae5a3813bd195e5bc7e3fbed4aae472110f4,PodSandboxId:47a32c6df465f92061192a10b9a25d875f5826123067ac612d85f8430b658a72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710486485517431844,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d,PodSandboxId:cf06821a641247b4dd7ff4800383af264f7280a09f5547d91d6283d9539bc22b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710486484648226793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18d7a6-f304-4f3e-b62c-7f454837676b,},Annotations:map[string]string{io.kubernetes.container.hash: e63d04cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae396d2326a5fe523023603345b90c1ecf2aef92a297003ccb8dc6bc70e3387,PodSandboxId:12264f837b1cd0b369d6c2cfce386b48a8393f8f9c26dc36d833781911a56d4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482916707127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3594b36f28a7c086ebd2223be190f1432cfd72188e33bfb3744118ff0cbb0720,PodSandboxId:2d8f4d5366e6f6fe544dde625d813a1a2ea0dda0d9d4994ee4e90b6a08af558e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710486482823463983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d
6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881abf62da23e1c80ff79aa43cf362c73bd2d2e58620a08912cb4ea904a119c,PodSandboxId:ce0de8c0e430111a3cd99cb1ad22b4cb9abd9b1a07e07fa3da6eeea4ee412651,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710486480836049956,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3,PodSandboxId:94b15ecebcdd9fccabdd526f407a98465ed0fc4854cc9dcd9f5ab37d7721c4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710486475253669379,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb48ab287c8bf6ddfa065bfd312aa16,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb,PodSandboxId:eb1e7895358832337a23335142f4e3b1d3894e671a3fc0f18383e3dbc4d08041,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710486475218248946,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b796e087751848c80e9997e9710f857c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93,PodSandboxId:968828d6fe055f2a2cfc2c15aaf2b511ee6c19be894617e971fdcc24844194d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710486475201107204,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d0162999e430197bd25518d084e9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f2a88c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f,PodSandboxId:b5d3fa333195ee9d4fdf8e944945476b40820846d16952abd1ce6cc29c2f650c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442924292828,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qsclc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338c1a51-a514-4256-86cb-14ec36eca10b,},Annotations:map[string]string{io.kubernetes.container.hash: fbe3932,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352,PodSandboxId:d4b3990c12a5a666140ccffff7200edb87610980b35b201c640efd34f957e704,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710486442889299468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kjtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe25b594-7ac2-40fc-9e32-e780d6dc90cd,},Annotations:map[string]string{io.kubernetes.container.hash: d2a87624,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb,PodSandboxId:12a320e8520028c139c62e52916776726e014d07ab483540eae5cb
b1e96939f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710486442286225469,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n5khh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13184da1-665b-428a-bdfb-e288a01f6256,},Annotations:map[string]string{io.kubernetes.container.hash: c62184bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676,PodSandboxId:b3f71b9deefbc3ba6825c7caec43dcf3f114db6f3b3173db0031208c8a86049b,Metadata:&ContainerMeta
data{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710486423279790791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-294072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986ea030a2b8a2e4de655015deead3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4d31ca61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed85dd6e-0a12-4358-a6d4-b28ddad55ba6 name=/runtime.v1.RuntimeService/ListContainers
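The ListContainers, Version, and ImageFsInfo entries above are CRI-O's gRPC debug logging of CRI requests and responses. The same stream can usually be followed live on the node with journalctl, assuming CRI-O runs as a systemd unit named crio (which the crio[2172] log prefix suggests):

	sudo journalctl -u crio -f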
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5c3e9513342e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 seconds ago        Running             storage-provisioner       2                   cf06821a64124       storage-provisioner
	9f0e2a8270052       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   10 seconds ago       Running             kube-controller-manager   2                   eb1e789535883       kube-controller-manager-kubernetes-upgrade-294072
	bb48275fd72bc       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   10 seconds ago       Running             kube-scheduler            2                   94b15ecebcdd9       kube-scheduler-kubernetes-upgrade-294072
	ca29ab68bdad1       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   10 seconds ago       Running             kube-apiserver            2                   968828d6fe055       kube-apiserver-kubernetes-upgrade-294072
	f1f622f524a28       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   22 seconds ago       Running             kube-proxy                1                   47a32c6df465f       kube-proxy-n5khh
	2802b82f25c1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago       Exited              storage-provisioner       1                   cf06821a64124       storage-provisioner
	8ae396d2326a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago       Running             coredns                   1                   12264f837b1cd       coredns-76f75df574-qsclc
	3594b36f28a7c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago       Running             coredns                   1                   2d8f4d5366e6f       coredns-76f75df574-kjtpd
	4881abf62da23       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   26 seconds ago       Running             etcd                      1                   ce0de8c0e4301       etcd-kubernetes-upgrade-294072
	bb20decd4d50a       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   32 seconds ago       Exited              kube-scheduler            1                   94b15ecebcdd9       kube-scheduler-kubernetes-upgrade-294072
	747f26973dcde       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   32 seconds ago       Exited              kube-controller-manager   1                   eb1e789535883       kube-controller-manager-kubernetes-upgrade-294072
	01a4b7c35bb22       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   32 seconds ago       Exited              kube-apiserver            1                   968828d6fe055       kube-apiserver-kubernetes-upgrade-294072
	a6e4b1b5813a4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   b5d3fa333195e       coredns-76f75df574-qsclc
	4f862895114b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d4b3990c12a5a       coredns-76f75df574-kjtpd
	38e36bbbf9b24       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   About a minute ago   Exited              kube-proxy                0                   12a320e852002       kube-proxy-n5khh
	3e907e5ceaa50       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   About a minute ago   Exited              etcd                      0                   b3f71b9deefbc       etcd-kubernetes-upgrade-294072
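A listing like the container status table above can typically be reproduced on the node with crictl against CRI-O's socket; the endpoint path below is the usual CRI-O default and is assumed here:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a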
	
	
	==> coredns [3594b36f28a7c086ebd2223be190f1432cfd72188e33bfb3744118ff0cbb0720] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [4f862895114b81edccb88bd93bc8d8386bf6e947b6f65d6b158453fad8221352] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8ae396d2326a5fe523023603345b90c1ecf2aef92a297003ccb8dc6bc70e3387] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a6e4b1b5813a42cf7fd9d93902c3cbb23ac6b63bd2b30b82827176895cf5903f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
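Per-container log sections like the coredns blocks above can typically be pulled directly by container ID (the truncated IDs from the status table usually work as prefixes) with crictl, for example:

	sudo crictl logs a6e4b1b5813a4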
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-294072
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-294072
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-294072
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:08:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:08:20 +0000   Fri, 15 Mar 2024 07:07:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:08:20 +0000   Fri, 15 Mar 2024 07:07:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:08:20 +0000   Fri, 15 Mar 2024 07:07:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:08:20 +0000   Fri, 15 Mar 2024 07:07:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    kubernetes-upgrade-294072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 08864a6e281b4ffab79f390200d99fa2
	  System UUID:                08864a6e-281b-4ffa-b79f-390200d99fa2
	  Boot ID:                    433f993b-bd48-4388-96cc-403e2f34076a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kjtpd                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 coredns-76f75df574-qsclc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-kubernetes-upgrade-294072                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         78s
	  kube-system                 kube-apiserver-kubernetes-upgrade-294072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-294072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-n5khh                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-kubernetes-upgrade-294072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 86s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasSufficientMemory
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           67s                node-controller  Node kubernetes-upgrade-294072 event: Registered Node kubernetes-upgrade-294072 in Controller
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11s (x8 over 12s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 12s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 12s)  kubelet          Node kubernetes-upgrade-294072 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.240620] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.085795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084315] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.213039] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.146372] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.328269] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +5.459964] systemd-fstab-generator[728]: Ignoring "noauto" option for root device
	[  +0.099114] kauditd_printk_skb: 130 callbacks suppressed
	[Mar15 07:07] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +9.150593] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.077095] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.937782] kauditd_printk_skb: 21 callbacks suppressed
	[ +30.606705] systemd-fstab-generator[2034]: Ignoring "noauto" option for root device
	[  +0.096211] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.067493] systemd-fstab-generator[2046]: Ignoring "noauto" option for root device
	[  +0.219506] systemd-fstab-generator[2060]: Ignoring "noauto" option for root device
	[  +0.158979] systemd-fstab-generator[2072]: Ignoring "noauto" option for root device
	[  +0.309060] systemd-fstab-generator[2096]: Ignoring "noauto" option for root device
	[  +1.120748] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[Mar15 07:08] kauditd_printk_skb: 158 callbacks suppressed
	[ +15.600459] systemd-fstab-generator[3144]: Ignoring "noauto" option for root device
	[  +0.095193] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.127053] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.253485] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	
	
	==> etcd [3e907e5ceaa50c87010e893261b2533da60876a7e55f63df9d9db710ba03b676] <==
	{"level":"info","ts":"2024-03-15T07:07:24.381929Z","caller":"traceutil/trace.go:171","msg":"trace[1493796011] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:366; }","duration":"533.675486ms","start":"2024-03-15T07:07:23.848249Z","end":"2024-03-15T07:07:24.381924Z","steps":["trace[1493796011] 'agreement among raft nodes before linearized reading'  (duration: 533.604252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:07:24.381948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:23.848184Z","time spent":"533.760066ms","remote":"127.0.0.1:59646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-15T07:07:24.382143Z","caller":"traceutil/trace.go:171","msg":"trace[481993002] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"530.348559ms","start":"2024-03-15T07:07:23.851759Z","end":"2024-03-15T07:07:24.382108Z","steps":["trace[481993002] 'process raft request'  (duration: 530.282174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:07:24.383133Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:23.851743Z","time spent":"531.197241ms","remote":"127.0.0.1:59800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3967,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:317 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3913 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2024-03-15T07:07:24.382154Z","caller":"traceutil/trace.go:171","msg":"trace[860178522] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"534.272436ms","start":"2024-03-15T07:07:23.847873Z","end":"2024-03-15T07:07:24.382145Z","steps":["trace[860178522] 'process raft request'  (duration: 150.213782ms)","trace[860178522] 'compare'  (duration: 383.51572ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:07:24.383363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:23.847856Z","time spent":"535.483295ms","remote":"127.0.0.1:59694","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":756,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-76f75df574-kjtpd.17bcde5b96d48e99\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-76f75df574-kjtpd.17bcde5b96d48e99\" value_size:668 lease:221395741242317321 >> failure:<>"}
	{"level":"warn","ts":"2024-03-15T07:07:24.889756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.80806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9444767778097093474 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5ba6cdbaba\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5ba6cdbaba\" value_size:668 lease:221395741242317321 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T07:07:24.890016Z","caller":"traceutil/trace.go:171","msg":"trace[129777140] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"502.331613ms","start":"2024-03-15T07:07:24.387665Z","end":"2024-03-15T07:07:24.889997Z","steps":["trace[129777140] 'process raft request'  (duration: 370.209332ms)","trace[129777140] 'compare'  (duration: 131.137386ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:07:24.890134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:24.387653Z","time spent":"502.439155ms","remote":"127.0.0.1:59694","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":756,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5ba6cdbaba\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5ba6cdbaba\" value_size:668 lease:221395741242317321 >> failure:<>"}
	{"level":"info","ts":"2024-03-15T07:07:25.242193Z","caller":"traceutil/trace.go:171","msg":"trace[84444655] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"347.600559ms","start":"2024-03-15T07:07:24.894577Z","end":"2024-03-15T07:07:25.242178Z","steps":["trace[84444655] 'process raft request'  (duration: 347.555279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:07:25.242424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:24.894479Z","time spent":"347.891529ms","remote":"127.0.0.1:59694","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":756,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5bb7ae49d5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-76f75df574-qsclc.17bcde5bb7ae49d5\" value_size:668 lease:221395741242317321 >> failure:<>"}
	{"level":"info","ts":"2024-03-15T07:07:25.242435Z","caller":"traceutil/trace.go:171","msg":"trace[2139302512] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"848.753496ms","start":"2024-03-15T07:07:24.393657Z","end":"2024-03-15T07:07:25.242411Z","steps":["trace[2139302512] 'process raft request'  (duration: 761.18945ms)","trace[2139302512] 'compare'  (duration: 87.167111ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:07:25.242709Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:07:24.393642Z","time spent":"849.028426ms","remote":"127.0.0.1:59800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4616,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-n5khh\" mod_revision:332 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-n5khh\" value_size:4565 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-n5khh\" > >"}
	{"level":"info","ts":"2024-03-15T07:07:25.538446Z","caller":"traceutil/trace.go:171","msg":"trace[965251420] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"279.360391ms","start":"2024-03-15T07:07:25.259069Z","end":"2024-03-15T07:07:25.538429Z","steps":["trace[965251420] 'process raft request'  (duration: 279.294576ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:07:25.538716Z","caller":"traceutil/trace.go:171","msg":"trace[264403920] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"284.874029ms","start":"2024-03-15T07:07:25.253833Z","end":"2024-03-15T07:07:25.538707Z","steps":["trace[264403920] 'process raft request'  (duration: 276.128917ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:07:44.993727Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T07:07:44.993814Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-294072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.216:2380"],"advertise-client-urls":["https://192.168.39.216:2379"]}
	{"level":"warn","ts":"2024-03-15T07:07:44.99397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T07:07:44.994154Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T07:07:45.110566Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.216:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T07:07:45.110689Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.216:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T07:07:45.110793Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"df37874d7ae18312","current-leader-member-id":"df37874d7ae18312"}
	{"level":"info","ts":"2024-03-15T07:07:45.114213Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.216:2380"}
	{"level":"info","ts":"2024-03-15T07:07:45.114458Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.216:2380"}
	{"level":"info","ts":"2024-03-15T07:07:45.114559Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-294072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.216:2380"],"advertise-client-urls":["https://192.168.39.216:2379"]}
	
	
	==> etcd [4881abf62da23e1c80ff79aa43cf362c73bd2d2e58620a08912cb4ea904a119c] <==
	{"level":"info","ts":"2024-03-15T07:08:01.013927Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T07:08:01.014294Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"df37874d7ae18312","initial-advertise-peer-urls":["https://192.168.39.216:2380"],"listen-peer-urls":["https://192.168.39.216:2380"],"advertise-client-urls":["https://192.168.39.216:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.216:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T07:08:01.014373Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T07:08:01.990407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T07:08:01.990568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:08:01.990622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 received MsgPreVoteResp from df37874d7ae18312 at term 2"}
	{"level":"info","ts":"2024-03-15T07:08:01.990644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T07:08:01.990652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 received MsgVoteResp from df37874d7ae18312 at term 3"}
	{"level":"info","ts":"2024-03-15T07:08:01.990664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df37874d7ae18312 became leader at term 3"}
	{"level":"info","ts":"2024-03-15T07:08:01.990675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: df37874d7ae18312 elected leader df37874d7ae18312 at term 3"}
	{"level":"info","ts":"2024-03-15T07:08:01.998031Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"df37874d7ae18312","local-member-attributes":"{Name:kubernetes-upgrade-294072 ClientURLs:[https://192.168.39.216:2379]}","request-path":"/0/members/df37874d7ae18312/attributes","cluster-id":"d4299564933997de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:08:01.998049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:08:01.99844Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:08:01.998479Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:08:01.998127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:08:02.001226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:08:02.001795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.216:2379"}
	{"level":"info","ts":"2024-03-15T07:08:24.175174Z","caller":"traceutil/trace.go:171","msg":"trace[694851724] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"316.029215ms","start":"2024-03-15T07:08:23.859114Z","end":"2024-03-15T07:08:24.175143Z","steps":["trace[694851724] 'process raft request'  (duration: 315.836748ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:08:24.176438Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:08:23.859099Z","time spent":"316.178377ms","remote":"127.0.0.1:39068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5555,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-294072\" mod_revision:421 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-294072\" value_size:5490 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-294072\" > >"}
	{"level":"warn","ts":"2024-03-15T07:08:24.957696Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.524693ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9444767778111774555 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:470 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1037 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T07:08:24.957806Z","caller":"traceutil/trace.go:171","msg":"trace[1732680868] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"1.064807752s","start":"2024-03-15T07:08:23.892984Z","end":"2024-03-15T07:08:24.957792Z","steps":["trace[1732680868] 'process raft request'  (duration: 677.883197ms)","trace[1732680868] 'compare'  (duration: 386.285599ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:08:24.957858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:08:23.892968Z","time spent":"1.064864227s","remote":"127.0.0.1:39040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1110,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:470 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1037 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-15T07:08:25.219933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.070043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4091"}
	{"level":"info","ts":"2024-03-15T07:08:25.220113Z","caller":"traceutil/trace.go:171","msg":"trace[1282735579] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:472; }","duration":"141.263329ms","start":"2024-03-15T07:08:25.078834Z","end":"2024-03-15T07:08:25.220098Z","steps":["trace[1282735579] 'range keys from in-memory index tree'  (duration: 140.759364ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:08:25.357746Z","caller":"traceutil/trace.go:171","msg":"trace[1037650656] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"102.374418ms","start":"2024-03-15T07:08:25.255358Z","end":"2024-03-15T07:08:25.357732Z","steps":["trace[1037650656] 'process raft request'  (duration: 102.275583ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:08:28 up 1 min,  0 users,  load average: 1.09, 0.38, 0.14
	Linux kubernetes-upgrade-294072 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93] <==
	I0315 07:08:04.706451       1 controller.go:178] quota evaluator worker shutdown
	I0315 07:08:04.706475       1 controller.go:178] quota evaluator worker shutdown
	I0315 07:08:04.706595       1 controller.go:178] quota evaluator worker shutdown
	I0315 07:08:04.706742       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0315 07:08:04.710687       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0315 07:08:05.109253       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:05.112301       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:06.110229       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:06.112899       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:07.110166       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:07.111760       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:08.109059       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:08.111625       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:09.110074       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:09.111742       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:10.109619       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:10.112330       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:11.110130       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:11.112604       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:12.108980       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:12.111871       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:13.109279       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:13.113169       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0315 07:08:14.110053       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0315 07:08:14.112693       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [ca29ab68bdad1a1f0294bca920b8a3d1cbbf6bc6ed5f422b835df901ae79ef71] <==
	I0315 07:08:20.662188       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 07:08:20.683723       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 07:08:20.707060       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0315 07:08:20.707475       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 07:08:20.708011       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0315 07:08:20.708075       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0315 07:08:20.709373       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 07:08:20.712447       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 07:08:20.719958       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 07:08:20.729213       1 aggregator.go:165] initial CRD sync complete...
	I0315 07:08:20.729322       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 07:08:20.729356       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 07:08:20.729387       1 cache.go:39] Caches are synced for autoregister controller
	I0315 07:08:21.205880       1 controller.go:624] quota admission added evaluator for: endpoints
	I0315 07:08:21.489234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 07:08:22.556409       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0315 07:08:22.572488       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0315 07:08:22.638256       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 07:08:22.686816       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 07:08:22.698850       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 07:08:24.959040       1 trace.go:236] Trace[979481275]: "Patch" accept:application/json,audit-id:9bff8a54-722f-4984-9e5d-92447d0f3925,client:127.0.0.1,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:kubectl/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:PATCH (15-Mar-2024 07:08:23.890) (total time: 1068ms):
	Trace[979481275]: ["GuaranteedUpdate etcd3" audit-id:9bff8a54-722f-4984-9e5d-92447d0f3925,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 1067ms (07:08:23.891)
	Trace[979481275]:  ---"Txn call completed" 1066ms (07:08:24.958)]
	Trace[979481275]: ---"Object stored in database" 1066ms (07:08:24.958)
	Trace[979481275]: [1.068111853s] [1.068111853s] END
	
	
	==> kube-controller-manager [747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb] <==
	I0315 07:07:56.929899       1 serving.go:380] Generated self-signed cert in-memory
	I0315 07:07:57.105636       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0315 07:07:57.105684       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:07:57.107350       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 07:07:57.107631       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 07:07:57.108552       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 07:07:57.108641       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	
	
	==> kube-controller-manager [9f0e2a82700527a75e25d14a1fbfd898d809d43bfddbb63f0d13fe1e541945a7] <==
	I0315 07:08:22.836570       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0315 07:08:22.836892       1 disruption.go:433] "Sending events to api server."
	I0315 07:08:22.836959       1 disruption.go:444] "Starting disruption controller"
	I0315 07:08:22.836971       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0315 07:08:22.842447       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0315 07:08:22.842779       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0315 07:08:22.842826       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0315 07:08:22.850981       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0315 07:08:22.851249       1 ttl_controller.go:124] "Starting TTL controller"
	I0315 07:08:22.851294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0315 07:08:22.858452       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0315 07:08:22.858818       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0315 07:08:22.858863       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0315 07:08:22.867382       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0315 07:08:22.881791       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0315 07:08:22.881836       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0315 07:08:22.882476       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0315 07:08:22.882815       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0315 07:08:22.882858       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0315 07:08:22.898719       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0315 07:08:22.899038       1 job_controller.go:224] "Starting job controller"
	I0315 07:08:22.901803       1 shared_informer.go:311] Waiting for caches to sync for job
	I0315 07:08:22.907860       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0315 07:08:22.908062       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0315 07:08:22.908094       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	
	
	==> kube-proxy [38e36bbbf9b248fc7566a86004bd8aef4605ef617c23d2a305891cea01b426fb] <==
	I0315 07:07:22.823897       1 server_others.go:72] "Using iptables proxy"
	I0315 07:07:22.857334       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.216"]
	I0315 07:07:22.922189       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0315 07:07:22.922236       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:07:22.922252       1 server_others.go:168] "Using iptables Proxier"
	I0315 07:07:22.963740       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:07:22.963955       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0315 07:07:22.963987       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:07:22.973443       1 config.go:188] "Starting service config controller"
	I0315 07:07:22.973486       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:07:22.973567       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:07:22.973573       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:07:22.973901       1 config.go:315] "Starting node config controller"
	I0315 07:07:22.973906       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:07:23.076282       1 shared_informer.go:318] Caches are synced for node config
	I0315 07:07:23.076400       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:07:23.076866       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f1f622f524a28b8c3240c5ee5005ae5a3813bd195e5bc7e3fbed4aae472110f4] <==
	I0315 07:08:05.671704       1 server_others.go:72] "Using iptables proxy"
	E0315 07:08:05.674687       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-294072\": dial tcp 192.168.39.216:8443: connect: connection refused"
	E0315 07:08:06.841088       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-294072\": dial tcp 192.168.39.216:8443: connect: connection refused"
	E0315 07:08:09.237096       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-294072\": dial tcp 192.168.39.216:8443: connect: connection refused"
	E0315 07:08:13.680235       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-294072\": dial tcp 192.168.39.216:8443: connect: connection refused"
	I0315 07:08:22.316312       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.216"]
	I0315 07:08:22.398567       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0315 07:08:22.398658       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:08:22.398681       1 server_others.go:168] "Using iptables Proxier"
	I0315 07:08:22.402550       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:08:22.402826       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0315 07:08:22.402871       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:08:22.404806       1 config.go:188] "Starting service config controller"
	I0315 07:08:22.404884       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:08:22.404922       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:08:22.404957       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:08:22.405829       1 config.go:315] "Starting node config controller"
	I0315 07:08:22.405875       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:08:22.505365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:08:22.505490       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:08:22.506103       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3] <==
	I0315 07:07:57.053190       1 serving.go:380] Generated self-signed cert in-memory
	W0315 07:08:04.265465       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:08:04.267105       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:08:04.269032       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:08:04.269177       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:08:04.354210       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0315 07:08:04.354415       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:08:04.364008       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 07:08:04.364072       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 07:08:04.364088       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:08:04.364101       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 07:08:04.379969       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0315 07:08:04.380125       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0315 07:08:04.380719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 07:08:04.380785       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0315 07:08:04.380904       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0315 07:08:04.389894       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb48275fd72bcf37d997bf5650a90a9638184101824e26b329c99a955b3423ee] <==
	I0315 07:08:18.586921       1 serving.go:380] Generated self-signed cert in-memory
	W0315 07:08:20.606414       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:08:20.606560       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:08:20.606578       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:08:20.606587       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:08:20.740317       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0315 07:08:20.740385       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:08:20.770114       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 07:08:20.781964       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 07:08:20.782009       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:08:20.782040       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 07:08:20.882820       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.112232    3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b796e087751848c80e9997e9710f857c-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-294072\" (UID: \"b796e087751848c80e9997e9710f857c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-294072"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.112440    3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b796e087751848c80e9997e9710f857c-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-294072\" (UID: \"b796e087751848c80e9997e9710f857c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-294072"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: E0315 07:08:17.308869    3151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-294072?timeout=10s\": dial tcp 192.168.39.216:8443: connect: connection refused" interval="800ms"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.358665    3151 scope.go:117] "RemoveContainer" containerID="01a4b7c35bb225585bcd971800eb3f61096705f953b9e595be4deadf768b3a93"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.360480    3151 scope.go:117] "RemoveContainer" containerID="747f26973dcde1c76fcc946c1819624cbf1939f559b123aed4cfc551f720c7cb"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.362074    3151 scope.go:117] "RemoveContainer" containerID="bb20decd4d50a778d59d808d4850c5576820db35bdfa32107c7eae6b72ab20d3"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:17.428038    3151 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-294072"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: E0315 07:08:17.433066    3151 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.216:8443: connect: connection refused" node="kubernetes-upgrade-294072"
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: W0315 07:08:17.576254    3151 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.216:8443: connect: connection refused
	Mar 15 07:08:17 kubernetes-upgrade-294072 kubelet[3151]: E0315 07:08:17.576361    3151 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.216:8443: connect: connection refused
	Mar 15 07:08:18 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:18.234459    3151 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-294072"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.684670    3151 apiserver.go:52] "Watching apiserver"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.713633    3151 topology_manager.go:215] "Topology Admit Handler" podUID="bd18d7a6-f304-4f3e-b62c-7f454837676b" podNamespace="kube-system" podName="storage-provisioner"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.713886    3151 topology_manager.go:215] "Topology Admit Handler" podUID="13184da1-665b-428a-bdfb-e288a01f6256" podNamespace="kube-system" podName="kube-proxy-n5khh"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.714000    3151 topology_manager.go:215] "Topology Admit Handler" podUID="fe25b594-7ac2-40fc-9e32-e780d6dc90cd" podNamespace="kube-system" podName="coredns-76f75df574-kjtpd"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.714094    3151 topology_manager.go:215] "Topology Admit Handler" podUID="338c1a51-a514-4256-86cb-14ec36eca10b" podNamespace="kube-system" podName="coredns-76f75df574-qsclc"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.754792    3151 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-294072"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.755026    3151 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-294072"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.774773    3151 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.776130    3151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.804097    3151 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.858555    3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd18d7a6-f304-4f3e-b62c-7f454837676b-tmp\") pod \"storage-provisioner\" (UID: \"bd18d7a6-f304-4f3e-b62c-7f454837676b\") " pod="kube-system/storage-provisioner"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.859221    3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13184da1-665b-428a-bdfb-e288a01f6256-lib-modules\") pod \"kube-proxy-n5khh\" (UID: \"13184da1-665b-428a-bdfb-e288a01f6256\") " pod="kube-system/kube-proxy-n5khh"
	Mar 15 07:08:20 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:20.859608    3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13184da1-665b-428a-bdfb-e288a01f6256-xtables-lock\") pod \"kube-proxy-n5khh\" (UID: \"13184da1-665b-428a-bdfb-e288a01f6256\") " pod="kube-system/kube-proxy-n5khh"
	Mar 15 07:08:21 kubernetes-upgrade-294072 kubelet[3151]: I0315 07:08:21.017166    3151 scope.go:117] "RemoveContainer" containerID="2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d"
	
	
	==> storage-provisioner [2802b82f25c1cddaf5d93329955a56cd64d54486867e8452a7c3fea38621393d] <==
	I0315 07:08:04.782573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0315 07:08:04.786303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5c3e9513342e5f56af6a005a8eee62f5f38b688356bca490222d2640c5ed93e2] <==
	I0315 07:08:21.168999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:08:21.188878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:08:21.189011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:08:21.221492       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:08:21.221744       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-294072_d610209d-c4a5-4a62-a72b-71a387b3ca20!
	I0315 07:08:21.223060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b876c95b-8e3c-4679-b973-135851c780b8", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-294072_d610209d-c4a5-4a62-a72b-71a387b3ca20 became leader
	I0315 07:08:21.322467       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-294072_d610209d-c4a5-4a62-a72b-71a387b3ca20!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:08:26.980671   54288 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18213-8825/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
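
The "failed to output last start logs: ... bufio.Scanner: token too long" message in the stderr above comes from Go's bufio.Scanner, whose default per-line token limit is 64 KiB; lastStart.txt evidently contains a longer line. A minimal, self-contained sketch (illustrative only, not minikube's logs.go code; the file path is hypothetical) of reading such a file with an enlarged scanner buffer:

	// longlines.go - illustrative sketch only.
	// Shows why "bufio.Scanner: token too long" occurs (default 64 KiB token
	// limit) and how an enlarged buffer avoids it.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); raise it
		// to 10 MiB so very long log lines no longer return bufio.ErrTooLong.
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}

Alternatively, bufio.NewReader with ReadString('\n') grows its buffer as needed and has no fixed line-length limit.
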
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-294072 -n kubernetes-upgrade-294072
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-294072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-294072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-294072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-294072: (1.349031902s)
--- FAIL: TestKubernetesUpgrade (396.37s)
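
For reference, the request/limit percentages in the "Allocated resources" and pod tables of the node description above are computed against the node's allocatable capacity (cpu: 2, memory: 2164188Ki) and truncated to whole percent. A small standalone sketch of that arithmetic (an assumption-level illustration, not kubectl's actual implementation):

	// alloc_pct.go - arithmetic sketch; capacity values copied from the
	// "describe nodes" output above, not read from a live cluster.
	package main

	import "fmt"

	// pct returns used/capacity as a whole percentage, truncated the same way
	// the table above truncates (850m of 2000m -> 42, not 42.5).
	func pct(used, capacity int64) int64 {
		return used * 100 / capacity
	}

	func main() {
		const (
			allocatableMilliCPU int64 = 2 * 1000 // cpu: 2
			allocatableMemKi    int64 = 2164188  // memory: 2164188Ki
		)
		fmt.Println(pct(850, allocatableMilliCPU))   // 42 -> "850m (42%)"
		fmt.Println(pct(240*1024, allocatableMemKi)) // 11 -> "240Mi (11%)"
		fmt.Println(pct(340*1024, allocatableMemKi)) // 16 -> "340Mi (16%)"
	}
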

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (302.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m1.782252536s)

                                                
                                                
-- stdout --
	* [old-k8s-version-981420] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-981420" primary control-plane node in "old-k8s-version-981420" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:07:45.799034   53799 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:07:45.799171   53799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:07:45.799180   53799 out.go:304] Setting ErrFile to fd 2...
	I0315 07:07:45.799184   53799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:07:45.799463   53799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:07:45.800115   53799 out.go:298] Setting JSON to false
	I0315 07:07:45.801245   53799 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6562,"bootTime":1710479904,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:07:45.801311   53799 start.go:139] virtualization: kvm guest
	I0315 07:07:45.803849   53799 out.go:177] * [old-k8s-version-981420] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:07:45.805739   53799 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:07:45.807194   53799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:07:45.805795   53799 notify.go:220] Checking for updates...
	I0315 07:07:45.808705   53799 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:07:45.810076   53799 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:07:45.811356   53799 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:07:45.812724   53799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:07:45.814694   53799 config.go:182] Loaded profile config "cert-expiration-266938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:07:45.814812   53799 config.go:182] Loaded profile config "cert-options-559541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:07:45.814915   53799 config.go:182] Loaded profile config "kubernetes-upgrade-294072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:07:45.815044   53799 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:07:45.859425   53799 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:07:45.861051   53799 start.go:297] selected driver: kvm2
	I0315 07:07:45.861093   53799 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:07:45.861109   53799 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:07:45.861929   53799 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:07:45.862004   53799 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:07:45.880914   53799 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:07:45.880976   53799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:07:45.881280   53799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:07:45.881366   53799 cni.go:84] Creating CNI manager for ""
	I0315 07:07:45.881381   53799 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:07:45.881394   53799 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:07:45.881494   53799 start.go:340] cluster config:
	{Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:07:45.881630   53799 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:07:45.883596   53799 out.go:177] * Starting "old-k8s-version-981420" primary control-plane node in "old-k8s-version-981420" cluster
	I0315 07:07:45.884873   53799 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:07:45.884916   53799 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 07:07:45.884927   53799 cache.go:56] Caching tarball of preloaded images
	I0315 07:07:45.885016   53799 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:07:45.885040   53799 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 07:07:45.885169   53799 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:07:45.885192   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json: {Name:mk282299da90236b026435d5900111e6e36224d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
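	# Annotation (not log output): a sketch for inspecting the profile config written just above,
	# assuming python3 is available on the CI host; the path comes from the preceding lines.
	python3 -m json.tool /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json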
	I0315 07:07:45.885379   53799 start.go:360] acquireMachinesLock for old-k8s-version-981420: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:08:17.093787   53799 start.go:364] duration metric: took 31.20836264s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:08:17.093852   53799 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:08:17.093993   53799 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:08:17.096045   53799 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 07:08:17.096298   53799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:08:17.096361   53799 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:08:17.116373   53799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0315 07:08:17.116921   53799 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:08:17.117560   53799 main.go:141] libmachine: Using API Version  1
	I0315 07:08:17.117584   53799 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:08:17.117944   53799 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:08:17.118112   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:08:17.118267   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:17.118419   53799 start.go:159] libmachine.API.Create for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:08:17.118449   53799 client.go:168] LocalClient.Create starting
	I0315 07:08:17.118485   53799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:08:17.118524   53799 main.go:141] libmachine: Decoding PEM data...
	I0315 07:08:17.118545   53799 main.go:141] libmachine: Parsing certificate...
	I0315 07:08:17.118612   53799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:08:17.118646   53799 main.go:141] libmachine: Decoding PEM data...
	I0315 07:08:17.118663   53799 main.go:141] libmachine: Parsing certificate...
	I0315 07:08:17.118692   53799 main.go:141] libmachine: Running pre-create checks...
	I0315 07:08:17.118705   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .PreCreateCheck
	I0315 07:08:17.119094   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:08:17.119511   53799 main.go:141] libmachine: Creating machine...
	I0315 07:08:17.119526   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .Create
	I0315 07:08:17.119685   53799 main.go:141] libmachine: (old-k8s-version-981420) Creating KVM machine...
	I0315 07:08:17.121210   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found existing default KVM network
	I0315 07:08:17.122508   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.122328   54096 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f4:5b:03} reservation:<nil>}
	I0315 07:08:17.123424   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.123323   54096 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b5:c7:f6} reservation:<nil>}
	I0315 07:08:17.124646   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.124546   54096 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308930}
	I0315 07:08:17.124671   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | created network xml: 
	I0315 07:08:17.124685   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | <network>
	I0315 07:08:17.124698   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   <name>mk-old-k8s-version-981420</name>
	I0315 07:08:17.124713   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   <dns enable='no'/>
	I0315 07:08:17.124725   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   
	I0315 07:08:17.124738   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0315 07:08:17.124752   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |     <dhcp>
	I0315 07:08:17.124768   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0315 07:08:17.124779   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |     </dhcp>
	I0315 07:08:17.124791   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   </ip>
	I0315 07:08:17.124812   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG |   
	I0315 07:08:17.124823   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | </network>
	I0315 07:08:17.124833   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | 
	I0315 07:08:17.130780   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | trying to create private KVM network mk-old-k8s-version-981420 192.168.61.0/24...
	I0315 07:08:17.210034   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | private KVM network mk-old-k8s-version-981420 192.168.61.0/24 created
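	# Annotation (not log output): a sketch of how the same private network could be created by hand
	# with virsh, assuming the XML dumped above is saved to /tmp/mk-old-k8s-version-981420.xml.
	virsh --connect qemu:///system net-define /tmp/mk-old-k8s-version-981420.xml
	virsh --connect qemu:///system net-start mk-old-k8s-version-981420
	virsh --connect qemu:///system net-list --all        # mk-old-k8s-version-981420 should show as active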
	I0315 07:08:17.210092   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420 ...
	I0315 07:08:17.210113   53799 main.go:141] libmachine: (old-k8s-version-981420) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:08:17.210126   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.209998   54096 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:08:17.210170   53799 main.go:141] libmachine: (old-k8s-version-981420) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:08:17.457331   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.457133   54096 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa...
	I0315 07:08:17.746503   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.746364   54096 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/old-k8s-version-981420.rawdisk...
	I0315 07:08:17.746534   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Writing magic tar header
	I0315 07:08:17.746551   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Writing SSH key tar header
	I0315 07:08:17.746644   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:17.746543   54096 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420 ...
	I0315 07:08:17.746707   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420
	I0315 07:08:17.746727   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:08:17.746745   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420 (perms=drwx------)
	I0315 07:08:17.746756   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:08:17.746778   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:08:17.746788   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:08:17.746800   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:08:17.746809   53799 main.go:141] libmachine: (old-k8s-version-981420) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:08:17.746821   53799 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:08:17.746836   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:08:17.746846   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:08:17.746857   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:08:17.746865   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:08:17.746876   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Checking permissions on dir: /home
	I0315 07:08:17.746884   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Skipping /home - not owner
	I0315 07:08:17.748363   53799 main.go:141] libmachine: (old-k8s-version-981420) define libvirt domain using xml: 
	I0315 07:08:17.748382   53799 main.go:141] libmachine: (old-k8s-version-981420) <domain type='kvm'>
	I0315 07:08:17.748393   53799 main.go:141] libmachine: (old-k8s-version-981420)   <name>old-k8s-version-981420</name>
	I0315 07:08:17.748401   53799 main.go:141] libmachine: (old-k8s-version-981420)   <memory unit='MiB'>2200</memory>
	I0315 07:08:17.748410   53799 main.go:141] libmachine: (old-k8s-version-981420)   <vcpu>2</vcpu>
	I0315 07:08:17.748418   53799 main.go:141] libmachine: (old-k8s-version-981420)   <features>
	I0315 07:08:17.748427   53799 main.go:141] libmachine: (old-k8s-version-981420)     <acpi/>
	I0315 07:08:17.748435   53799 main.go:141] libmachine: (old-k8s-version-981420)     <apic/>
	I0315 07:08:17.748443   53799 main.go:141] libmachine: (old-k8s-version-981420)     <pae/>
	I0315 07:08:17.748451   53799 main.go:141] libmachine: (old-k8s-version-981420)     
	I0315 07:08:17.748460   53799 main.go:141] libmachine: (old-k8s-version-981420)   </features>
	I0315 07:08:17.748489   53799 main.go:141] libmachine: (old-k8s-version-981420)   <cpu mode='host-passthrough'>
	I0315 07:08:17.748497   53799 main.go:141] libmachine: (old-k8s-version-981420)   
	I0315 07:08:17.748503   53799 main.go:141] libmachine: (old-k8s-version-981420)   </cpu>
	I0315 07:08:17.748511   53799 main.go:141] libmachine: (old-k8s-version-981420)   <os>
	I0315 07:08:17.748518   53799 main.go:141] libmachine: (old-k8s-version-981420)     <type>hvm</type>
	I0315 07:08:17.748527   53799 main.go:141] libmachine: (old-k8s-version-981420)     <boot dev='cdrom'/>
	I0315 07:08:17.748547   53799 main.go:141] libmachine: (old-k8s-version-981420)     <boot dev='hd'/>
	I0315 07:08:17.748557   53799 main.go:141] libmachine: (old-k8s-version-981420)     <bootmenu enable='no'/>
	I0315 07:08:17.748564   53799 main.go:141] libmachine: (old-k8s-version-981420)   </os>
	I0315 07:08:17.748572   53799 main.go:141] libmachine: (old-k8s-version-981420)   <devices>
	I0315 07:08:17.748580   53799 main.go:141] libmachine: (old-k8s-version-981420)     <disk type='file' device='cdrom'>
	I0315 07:08:17.748594   53799 main.go:141] libmachine: (old-k8s-version-981420)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/boot2docker.iso'/>
	I0315 07:08:17.748604   53799 main.go:141] libmachine: (old-k8s-version-981420)       <target dev='hdc' bus='scsi'/>
	I0315 07:08:17.748616   53799 main.go:141] libmachine: (old-k8s-version-981420)       <readonly/>
	I0315 07:08:17.748623   53799 main.go:141] libmachine: (old-k8s-version-981420)     </disk>
	I0315 07:08:17.748632   53799 main.go:141] libmachine: (old-k8s-version-981420)     <disk type='file' device='disk'>
	I0315 07:08:17.748643   53799 main.go:141] libmachine: (old-k8s-version-981420)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:08:17.748661   53799 main.go:141] libmachine: (old-k8s-version-981420)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/old-k8s-version-981420.rawdisk'/>
	I0315 07:08:17.748670   53799 main.go:141] libmachine: (old-k8s-version-981420)       <target dev='hda' bus='virtio'/>
	I0315 07:08:17.748683   53799 main.go:141] libmachine: (old-k8s-version-981420)     </disk>
	I0315 07:08:17.748691   53799 main.go:141] libmachine: (old-k8s-version-981420)     <interface type='network'>
	I0315 07:08:17.748701   53799 main.go:141] libmachine: (old-k8s-version-981420)       <source network='mk-old-k8s-version-981420'/>
	I0315 07:08:17.748709   53799 main.go:141] libmachine: (old-k8s-version-981420)       <model type='virtio'/>
	I0315 07:08:17.748717   53799 main.go:141] libmachine: (old-k8s-version-981420)     </interface>
	I0315 07:08:17.748725   53799 main.go:141] libmachine: (old-k8s-version-981420)     <interface type='network'>
	I0315 07:08:17.748733   53799 main.go:141] libmachine: (old-k8s-version-981420)       <source network='default'/>
	I0315 07:08:17.748740   53799 main.go:141] libmachine: (old-k8s-version-981420)       <model type='virtio'/>
	I0315 07:08:17.748750   53799 main.go:141] libmachine: (old-k8s-version-981420)     </interface>
	I0315 07:08:17.748756   53799 main.go:141] libmachine: (old-k8s-version-981420)     <serial type='pty'>
	I0315 07:08:17.748765   53799 main.go:141] libmachine: (old-k8s-version-981420)       <target port='0'/>
	I0315 07:08:17.748771   53799 main.go:141] libmachine: (old-k8s-version-981420)     </serial>
	I0315 07:08:17.748778   53799 main.go:141] libmachine: (old-k8s-version-981420)     <console type='pty'>
	I0315 07:08:17.748784   53799 main.go:141] libmachine: (old-k8s-version-981420)       <target type='serial' port='0'/>
	I0315 07:08:17.748792   53799 main.go:141] libmachine: (old-k8s-version-981420)     </console>
	I0315 07:08:17.748799   53799 main.go:141] libmachine: (old-k8s-version-981420)     <rng model='virtio'>
	I0315 07:08:17.748807   53799 main.go:141] libmachine: (old-k8s-version-981420)       <backend model='random'>/dev/random</backend>
	I0315 07:08:17.748816   53799 main.go:141] libmachine: (old-k8s-version-981420)     </rng>
	I0315 07:08:17.748823   53799 main.go:141] libmachine: (old-k8s-version-981420)     
	I0315 07:08:17.748829   53799 main.go:141] libmachine: (old-k8s-version-981420)     
	I0315 07:08:17.748835   53799 main.go:141] libmachine: (old-k8s-version-981420)   </devices>
	I0315 07:08:17.748841   53799 main.go:141] libmachine: (old-k8s-version-981420) </domain>
	I0315 07:08:17.748850   53799 main.go:141] libmachine: (old-k8s-version-981420) 
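	# Annotation (not log output): a manual equivalent of defining and starting this guest with virsh,
	# assuming the domain XML above is saved to /tmp/old-k8s-version-981420.xml (a sketch, not what the driver literally runs).
	virsh --connect qemu:///system define /tmp/old-k8s-version-981420.xml
	virsh --connect qemu:///system start old-k8s-version-981420
	virsh --connect qemu:///system dominfo old-k8s-version-981420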
	I0315 07:08:17.757716   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:14:d1:30 in network default
	I0315 07:08:17.758569   53799 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:08:17.758597   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:17.759626   53799 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:08:17.760057   53799 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:08:17.761008   53799 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:08:17.761999   53799 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:08:19.223128   53799 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:08:19.224280   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:19.225008   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:19.225211   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:19.225160   54096 retry.go:31] will retry after 242.87163ms: waiting for machine to come up
	I0315 07:08:19.469966   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:19.470830   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:19.470874   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:19.470778   54096 retry.go:31] will retry after 288.186717ms: waiting for machine to come up
	I0315 07:08:19.760295   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:19.760920   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:19.760983   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:19.760891   54096 retry.go:31] will retry after 452.735517ms: waiting for machine to come up
	I0315 07:08:20.215675   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:20.216249   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:20.216277   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:20.216228   54096 retry.go:31] will retry after 428.273548ms: waiting for machine to come up
	I0315 07:08:20.646878   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:20.647501   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:20.647534   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:20.647411   54096 retry.go:31] will retry after 641.165798ms: waiting for machine to come up
	I0315 07:08:21.290810   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:21.291338   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:21.291369   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:21.291305   54096 retry.go:31] will retry after 869.454623ms: waiting for machine to come up
	I0315 07:08:22.162654   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:22.163275   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:22.163306   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:22.163243   54096 retry.go:31] will retry after 971.738029ms: waiting for machine to come up
	I0315 07:08:23.136934   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:23.137527   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:23.137551   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:23.137470   54096 retry.go:31] will retry after 1.318799662s: waiting for machine to come up
	I0315 07:08:24.457769   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:24.458341   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:24.458371   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:24.458299   54096 retry.go:31] will retry after 1.712789026s: waiting for machine to come up
	I0315 07:08:26.174087   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:26.174601   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:26.174642   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:26.174559   54096 retry.go:31] will retry after 1.921652905s: waiting for machine to come up
	I0315 07:08:28.097849   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:28.098425   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:28.098457   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:28.098409   54096 retry.go:31] will retry after 1.902784498s: waiting for machine to come up
	I0315 07:08:30.277522   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:30.278123   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:30.278147   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:30.278062   54096 retry.go:31] will retry after 2.76547408s: waiting for machine to come up
	I0315 07:08:33.046849   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:33.047913   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:33.047979   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:33.047807   54096 retry.go:31] will retry after 3.543459483s: waiting for machine to come up
	I0315 07:08:36.594092   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:36.594636   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:08:36.594662   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:08:36.594583   54096 retry.go:31] will retry after 3.963971511s: waiting for machine to come up
	I0315 07:08:40.563282   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.563992   53799 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:08:40.564036   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.564057   53799 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:08:40.564460   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420
	I0315 07:08:40.642463   53799 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
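	# Annotation (not log output): the DHCP lease and guest address can be cross-checked from the host, e.g.:
	virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-981420
	virsh --connect qemu:///system domifaddr old-k8s-version-981420     # expected to report 192.168.61.243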
	I0315 07:08:40.642490   53799 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:08:40.642525   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:08:40.644887   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.645324   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:40.645354   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.645460   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:08:40.645489   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:08:40.645623   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:08:40.645643   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:08:40.645654   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:08:40.768828   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
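	# Annotation (not log output): a hand-run equivalent of the SSH probe above, reusing the key and
	# options shown in the external-ssh debug line (a sketch for manual debugging only).
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa \
	    docker@192.168.61.243 'exit 0' && echo "ssh reachable"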
	I0315 07:08:40.769073   53799 main.go:141] libmachine: (old-k8s-version-981420) KVM machine creation complete!
	I0315 07:08:40.769373   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:08:40.769860   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:40.770063   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:40.770237   53799 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 07:08:40.770252   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:08:40.771520   53799 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 07:08:40.771536   53799 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 07:08:40.771544   53799 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 07:08:40.771551   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:40.774009   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.774480   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:40.774503   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.774645   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:40.774798   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.774977   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.775093   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:40.775211   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:40.775413   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:40.775426   53799 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 07:08:40.876246   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:08:40.876282   53799 main.go:141] libmachine: Detecting the provisioner...
	I0315 07:08:40.876293   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:40.879160   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.879505   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:40.879541   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.879774   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:40.880030   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.880223   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.880432   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:40.880631   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:40.880784   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:40.880796   53799 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 07:08:40.981555   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 07:08:40.981662   53799 main.go:141] libmachine: found compatible host: buildroot
	I0315 07:08:40.981673   53799 main.go:141] libmachine: Provisioning with buildroot...
	I0315 07:08:40.981682   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:08:40.981913   53799 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:08:40.981935   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:08:40.982201   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:40.984425   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.984874   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:40.984906   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:40.985090   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:40.985293   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.985472   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:40.985612   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:40.985807   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:40.985990   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:40.986003   53799 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:08:41.099200   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:08:41.099244   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.102256   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.102634   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.102662   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.102845   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:41.103038   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.103178   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.103334   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:41.103489   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:41.103733   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:41.103755   53799 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:08:41.215087   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
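	# Annotation (not log output): a quick check that the hostname and /etc/hosts entry set above took
	# effect, assuming the profile name used by this test.
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- "hostname; grep old-k8s-version-981420 /etc/hosts"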
	I0315 07:08:41.215114   53799 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:08:41.215149   53799 buildroot.go:174] setting up certificates
	I0315 07:08:41.215159   53799 provision.go:84] configureAuth start
	I0315 07:08:41.215169   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:08:41.215438   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:08:41.218389   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.218825   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.218852   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.219005   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.221080   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.221352   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.221377   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.221529   53799 provision.go:143] copyHostCerts
	I0315 07:08:41.221592   53799 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:08:41.221605   53799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:08:41.221670   53799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:08:41.221782   53799 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:08:41.221794   53799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:08:41.221823   53799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:08:41.221902   53799 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:08:41.221912   53799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:08:41.221939   53799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:08:41.221998   53799 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
	I0315 07:08:41.370368   53799 provision.go:177] copyRemoteCerts
	I0315 07:08:41.370434   53799 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:08:41.370456   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.373256   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.373597   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.373627   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.373770   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:41.373935   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.374099   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:41.374195   53799 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:08:41.454593   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:08:41.480495   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:08:41.506856   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:08:41.532514   53799 provision.go:87] duration metric: took 317.344373ms to configureAuth
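	# Annotation (not log output): the certificates copied during configureAuth can be verified inside the guest, e.g.:
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"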
	I0315 07:08:41.532539   53799 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:08:41.532741   53799 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:08:41.532841   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.535345   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.535711   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.535743   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.535894   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:41.536061   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.536205   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.536309   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:41.536447   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:41.536665   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:41.536688   53799 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:08:41.797047   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:08:41.797074   53799 main.go:141] libmachine: Checking connection to Docker...
	I0315 07:08:41.797082   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetURL
	I0315 07:08:41.798424   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using libvirt version 6000000
	I0315 07:08:41.800745   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.801087   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.801119   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.801308   53799 main.go:141] libmachine: Docker is up and running!
	I0315 07:08:41.801327   53799 main.go:141] libmachine: Reticulating splines...
	I0315 07:08:41.801333   53799 client.go:171] duration metric: took 24.682876923s to LocalClient.Create
	I0315 07:08:41.801355   53799 start.go:167] duration metric: took 24.682937968s to libmachine.API.Create "old-k8s-version-981420"
	I0315 07:08:41.801365   53799 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:08:41.801377   53799 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:08:41.801392   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:41.801637   53799 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:08:41.801668   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.804073   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.804420   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.804448   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.804664   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:41.804870   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.805057   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:41.805207   53799 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:08:41.883174   53799 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:08:41.887531   53799 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:08:41.887554   53799 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:08:41.887614   53799 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:08:41.887681   53799 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:08:41.887765   53799 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:08:41.897407   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:08:41.923419   53799 start.go:296] duration metric: took 122.038026ms for postStartSetup
	I0315 07:08:41.923481   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:08:41.924036   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:08:41.926604   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.926936   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.926967   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.927219   53799 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:08:41.927446   53799 start.go:128] duration metric: took 24.833434421s to createHost
	I0315 07:08:41.927473   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:41.929465   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.929788   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:41.929818   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:41.929911   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:41.930089   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.930265   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:41.930409   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:41.930587   53799 main.go:141] libmachine: Using SSH client type: native
	I0315 07:08:41.930799   53799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:08:41.930824   53799 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:08:42.029333   53799 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710486522.004014898
	
	I0315 07:08:42.029359   53799 fix.go:216] guest clock: 1710486522.004014898
	I0315 07:08:42.029369   53799 fix.go:229] Guest: 2024-03-15 07:08:42.004014898 +0000 UTC Remote: 2024-03-15 07:08:41.927459505 +0000 UTC m=+56.189379379 (delta=76.555393ms)
	I0315 07:08:42.029424   53799 fix.go:200] guest clock delta is within tolerance: 76.555393ms
	I0315 07:08:42.029436   53799 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 24.935614219s
	I0315 07:08:42.029463   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:42.029735   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:08:42.032731   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.033082   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:42.033111   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.033311   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:42.033830   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:42.034041   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:08:42.034129   53799 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:08:42.034186   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:42.034272   53799 ssh_runner.go:195] Run: cat /version.json
	I0315 07:08:42.034297   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:08:42.037024   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.037209   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.037409   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:42.037443   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.037567   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:42.037588   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:42.037593   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:42.037791   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:42.037799   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:08:42.038001   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:42.038009   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:08:42.038114   53799 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:08:42.038219   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:08:42.038362   53799 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:08:42.123156   53799 ssh_runner.go:195] Run: systemctl --version
	I0315 07:08:42.158658   53799 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:08:42.326238   53799 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:08:42.334015   53799 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:08:42.334091   53799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:08:42.350727   53799 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
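	Note: the find/mv above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the bridge CNI that minikube generates later ("Creating CNI manager" below) is the only active config. A rough way to see what was set aside on the node, using only paths that appear in this log (the restore command is a hypothetical example):
	  sudo ls -la /etc/cni/net.d/
	  # restore a sidelined config by dropping the suffix, e.g.:
	  sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist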
	I0315 07:08:42.350766   53799 start.go:494] detecting cgroup driver to use...
	I0315 07:08:42.350822   53799 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:08:42.367610   53799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:08:42.383282   53799 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:08:42.383348   53799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:08:42.398922   53799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:08:42.414132   53799 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:08:42.529269   53799 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:08:42.695556   53799 docker.go:233] disabling docker service ...
	I0315 07:08:42.695633   53799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:08:42.711172   53799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:08:42.725482   53799 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:08:42.864362   53799 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:08:42.998447   53799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:08:43.014127   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:08:43.034221   53799 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:08:43.034283   53799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:08:43.045676   53799 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:08:43.045739   53799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:08:43.059717   53799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:08:43.071149   53799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:08:43.084164   53799 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
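	Note: taken together, the tee and sed commands above point crictl at the CRI-O socket via /etc/crictl.yaml and rewrite /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". A sketch of the expected end state; the TOML section headers are added here for orientation only, since the sed edits touch just the individual keys:
	  # /etc/crictl.yaml
	  runtime-endpoint: unix:///var/run/crio/crio.sock
	  # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.2"
	  # quick check once crio is restarted below:
	  sudo crictl version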
	I0315 07:08:43.097448   53799 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:08:43.108531   53799 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:08:43.108584   53799 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:08:43.124398   53799 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:08:43.137842   53799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:08:43.263046   53799 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:08:43.421829   53799 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:08:43.421899   53799 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:08:43.427632   53799 start.go:562] Will wait 60s for crictl version
	I0315 07:08:43.427677   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:43.431976   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:08:43.477159   53799 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:08:43.477234   53799 ssh_runner.go:195] Run: crio --version
	I0315 07:08:43.508812   53799 ssh_runner.go:195] Run: crio --version
	I0315 07:08:43.542779   53799 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:08:43.543977   53799 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:08:43.546986   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:43.547389   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:08:34 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:08:43.547428   53799 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:08:43.547652   53799 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:08:43.552098   53799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:08:43.569551   53799 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:08:43.569688   53799 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:08:43.569748   53799 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:08:43.607688   53799 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:08:43.607758   53799 ssh_runner.go:195] Run: which lz4
	I0315 07:08:43.612124   53799 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:08:43.616608   53799 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:08:43.616643   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:08:45.464740   53799 crio.go:444] duration metric: took 1.852647565s to copy over tarball
	I0315 07:08:45.464818   53799 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:08:48.180369   53799 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.715517312s)
	I0315 07:08:48.180407   53799 crio.go:451] duration metric: took 2.715641107s to extract the tarball
	I0315 07:08:48.180420   53799 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:08:48.223160   53799 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:08:48.275715   53799 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:08:48.275741   53799 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:08:48.275810   53799 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:08:48.275926   53799 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:08:48.275946   53799 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:08:48.275829   53799 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:08:48.275853   53799 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:08:48.275861   53799 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:08:48.275870   53799 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:08:48.275873   53799 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:08:48.277544   53799 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:08:48.277592   53799 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:08:48.277553   53799 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:08:48.277550   53799 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:08:48.277556   53799 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:08:48.277556   53799 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:08:48.277576   53799 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:08:48.277908   53799 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:08:48.499588   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:08:48.538879   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:08:48.546141   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:08:48.548959   53799 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:08:48.549002   53799 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:08:48.549053   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.551557   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:08:48.561594   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:08:48.569514   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:08:48.638423   53799 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:08:48.638472   53799 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:08:48.638521   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.680445   53799 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:08:48.680507   53799 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:08:48.680549   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.680556   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:08:48.680678   53799 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:08:48.680710   53799 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:08:48.680748   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.688152   53799 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:08:48.688198   53799 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:08:48.688242   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.689054   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:08:48.693163   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:08:48.693278   53799 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:08:48.693312   53799 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:08:48.693351   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.749722   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:08:48.749788   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:08:48.749834   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:08:48.749873   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:08:48.808494   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:08:48.808519   53799 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:08:48.808556   53799 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:08:48.808580   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:08:48.808596   53799 ssh_runner.go:195] Run: which crictl
	I0315 07:08:48.851667   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:08:48.876408   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:08:48.876448   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:08:48.886759   53799 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:08:48.886895   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:08:48.929094   53799 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:08:49.212043   53799 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:08:49.356975   53799 cache_images.go:92] duration metric: took 1.081215273s to LoadCachedImages
	W0315 07:08:49.357069   53799 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
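	Note: this warning is benign for the run itself. The preload tarball was extracted above but the runtime still reported no v1.20.0 images, and the fallback per-image cache under .minikube/cache/images is empty on this host, so minikube gives up on pre-seeding and lets kubeadm pull everything during preflight instead (see the "[preflight] Pulling images" line below). What the runtime actually holds can be checked with the same command the log runs:
	  sudo crictl images --output json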
	I0315 07:08:49.357088   53799 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:08:49.357216   53799 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
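	Note: the [Unit]/[Service] fragment printed above is installed on the node as a systemd drop-in (the scp lines below write /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service). A hedged way to inspect what the kubelet is actually started with when debugging this failure:
	  systemctl cat kubelet              # unit file plus the 10-kubeadm.conf drop-in
	  cat /var/lib/kubelet/config.yaml   # KubeletConfiguration, written later by kubeadm init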
	I0315 07:08:49.357307   53799 ssh_runner.go:195] Run: crio config
	I0315 07:08:49.414550   53799 cni.go:84] Creating CNI manager for ""
	I0315 07:08:49.414579   53799 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:08:49.414594   53799 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:08:49.414626   53799 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:08:49.414812   53799 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:08:49.414891   53799 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:08:49.426307   53799 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:08:49.426367   53799 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:08:49.436424   53799 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:08:49.454532   53799 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:08:49.471818   53799 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
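	Note: the kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new and only copied to /var/tmp/minikube/kubeadm.yaml right before init (the cp below). To reproduce this step by hand on the node, a dry run against the same file and the binary path from the log should be enough; this is a sketch, not part of the test flow:
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run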
	I0315 07:08:49.489388   53799 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:08:49.493460   53799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:08:49.506712   53799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:08:49.645920   53799 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:08:49.676124   53799 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:08:49.676149   53799 certs.go:194] generating shared ca certs ...
	I0315 07:08:49.676165   53799 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:49.676331   53799 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:08:49.676397   53799 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:08:49.676411   53799 certs.go:256] generating profile certs ...
	I0315 07:08:49.676500   53799 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:08:49.676519   53799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt with IP's: []
	I0315 07:08:49.740766   53799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt ...
	I0315 07:08:49.740801   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: {Name:mk17f2420a6d1806d5a6619b1292e425142d229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:49.740993   53799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key ...
	I0315 07:08:49.741012   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key: {Name:mk8797ee66765b3daab8a3e1d1c5cefa54ce6d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:49.741116   53799 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:08:49.741139   53799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt.718ebbc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.243]
	I0315 07:08:49.844844   53799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt.718ebbc0 ...
	I0315 07:08:49.844873   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt.718ebbc0: {Name:mkbb1a2dc950663a18e89584e24f8a7c1682b555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:49.845032   53799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0 ...
	I0315 07:08:49.845046   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0: {Name:mkb33c9f88537d5d7f0914b6b8946b2b90089a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:49.845118   53799 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt.718ebbc0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt
	I0315 07:08:49.845217   53799 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key
	I0315 07:08:49.845298   53799 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:08:49.845327   53799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt with IP's: []
	I0315 07:08:50.030197   53799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt ...
	I0315 07:08:50.030237   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt: {Name:mkdd5c2a3e59b4530193cf588d60020cc3527238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:50.030427   53799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key ...
	I0315 07:08:50.030443   53799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key: {Name:mk0ef4dc35903a5577e981f357a09942f219499a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:08:50.030614   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:08:50.030652   53799 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:08:50.030662   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:08:50.030681   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:08:50.030702   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:08:50.030721   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:08:50.030756   53799 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:08:50.031571   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:08:50.063409   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:08:50.092850   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:08:50.122485   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:08:50.154696   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:08:50.181822   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:08:50.208868   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:08:50.234863   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:08:50.266190   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:08:50.292955   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:08:50.319097   53799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:08:50.346410   53799 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:08:50.366566   53799 ssh_runner.go:195] Run: openssl version
	I0315 07:08:50.372900   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:08:50.384661   53799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:08:50.389545   53799 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:08:50.389606   53799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:08:50.395743   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:08:50.408406   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:08:50.420185   53799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:08:50.425345   53799 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:08:50.425407   53799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:08:50.433311   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:08:50.448714   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:08:50.460660   53799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:08:50.466203   53799 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:08:50.466290   53799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:08:50.472448   53799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
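	Note: the 8-hex-digit link names (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names; TLS clients look CA certs up in /etc/ssl/certs by that hash, which is why each PEM gets both a copy under /usr/share/ca-certificates and a hash symlink. The hash can be reproduced with the same command the log runs, e.g.:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941, per the symlink above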
	I0315 07:08:50.484777   53799 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:08:50.489700   53799 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:08:50.489771   53799 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:08:50.489867   53799 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:08:50.489926   53799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:08:50.546276   53799 cri.go:89] found id: ""
	I0315 07:08:50.546353   53799 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:08:50.558467   53799 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:08:50.587763   53799 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:08:50.607451   53799 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:08:50.607482   53799 kubeadm.go:156] found existing configuration files:
	
	I0315 07:08:50.607544   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:08:50.617850   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:08:50.617921   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:08:50.632966   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:08:50.643820   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:08:50.643876   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:08:50.655353   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:08:50.664848   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:08:50.664912   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:08:50.674945   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:08:50.684815   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:08:50.684878   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:08:50.694821   53799 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:08:50.815737   53799 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:08:50.815820   53799 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:08:50.958246   53799 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:08:50.958439   53799 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:08:50.958583   53799 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:08:51.147686   53799 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:08:51.149624   53799 out.go:204]   - Generating certificates and keys ...
	I0315 07:08:51.149720   53799 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:08:51.149811   53799 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:08:51.351295   53799 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:08:51.543715   53799 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:08:51.761877   53799 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:08:52.065732   53799 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:08:52.281182   53799 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:08:52.281411   53799 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	I0315 07:08:52.473501   53799 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:08:52.473707   53799 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	I0315 07:08:52.957705   53799 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:08:53.071192   53799 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:08:53.340658   53799 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:08:53.340875   53799 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:08:53.466511   53799 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:08:53.614724   53799 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:08:53.752378   53799 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:08:53.839408   53799 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:08:53.862396   53799 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:08:53.863577   53799 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:08:53.863637   53799 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:08:53.996901   53799 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:08:53.999889   53799 out.go:204]   - Booting up control plane ...
	I0315 07:08:54.000039   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:08:54.003503   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:08:54.004338   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:08:54.008584   53799 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:08:54.012155   53799 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:09:34.008103   53799 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:09:34.008905   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:09:34.009302   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:09:39.010043   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:09:39.010218   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:09:49.011020   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:09:49.011268   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:10:09.012660   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:10:09.012877   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:10:49.012172   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:10:49.012478   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:10:49.012507   53799 kubeadm.go:309] 
	I0315 07:10:49.012579   53799 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:10:49.012645   53799 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:10:49.012660   53799 kubeadm.go:309] 
	I0315 07:10:49.012698   53799 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:10:49.012767   53799 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:10:49.012901   53799 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:10:49.012910   53799 kubeadm.go:309] 
	I0315 07:10:49.013034   53799 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:10:49.013089   53799 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:10:49.013122   53799 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:10:49.013132   53799 kubeadm.go:309] 
	I0315 07:10:49.013287   53799 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:10:49.013423   53799 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:10:49.013445   53799 kubeadm.go:309] 
	I0315 07:10:49.013566   53799 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:10:49.013672   53799 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:10:49.013745   53799 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:10:49.013856   53799 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:10:49.013871   53799 kubeadm.go:309] 
	I0315 07:10:49.014409   53799 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:10:49.014512   53799 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:10:49.014611   53799 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0315 07:10:49.014782   53799 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-981420] and IPs [192.168.61.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0315 07:10:49.014839   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:10:50.223270   53799 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.20840496s)
	I0315 07:10:50.223367   53799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:10:50.237694   53799 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:10:50.248386   53799 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:10:50.248406   53799 kubeadm.go:156] found existing configuration files:
	
	I0315 07:10:50.248457   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:10:50.258826   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:10:50.258906   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:10:50.268891   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:10:50.278933   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:10:50.279003   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:10:50.291616   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:10:50.301412   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:10:50.301468   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:10:50.311790   53799 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:10:50.322014   53799 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:10:50.322078   53799 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:10:50.333281   53799 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:10:50.407649   53799 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:10:50.407786   53799 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:10:50.570668   53799 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:10:50.570792   53799 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:10:50.570929   53799 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:10:50.762010   53799 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:10:50.763863   53799 out.go:204]   - Generating certificates and keys ...
	I0315 07:10:50.763973   53799 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:10:50.764056   53799 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:10:50.764168   53799 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:10:50.764267   53799 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:10:50.764377   53799 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:10:50.764508   53799 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:10:50.764878   53799 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:10:50.765292   53799 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:10:50.765711   53799 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:10:50.766524   53799 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:10:50.766755   53799 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:10:50.766831   53799 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:10:50.911488   53799 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:10:51.248858   53799 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:10:51.496782   53799 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:10:51.711298   53799 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:10:51.727297   53799 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:10:51.728488   53799 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:10:51.728541   53799 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:10:51.890170   53799 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:10:51.891987   53799 out.go:204]   - Booting up control plane ...
	I0315 07:10:51.892113   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:10:51.895896   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:10:51.897351   53799 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:10:51.900721   53799 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:10:51.902877   53799 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:11:31.904378   53799 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:11:31.904826   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:11:31.905028   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:11:36.905340   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:11:36.905526   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:11:46.906002   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:11:46.906302   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:12:06.907362   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:12:06.907598   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:12:46.907517   53799 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:12:46.907747   53799 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:12:46.907763   53799 kubeadm.go:309] 
	I0315 07:12:46.907820   53799 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:12:46.907927   53799 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:12:46.907939   53799 kubeadm.go:309] 
	I0315 07:12:46.907969   53799 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:12:46.908003   53799 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:12:46.908140   53799 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:12:46.908152   53799 kubeadm.go:309] 
	I0315 07:12:46.908286   53799 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:12:46.908359   53799 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:12:46.908417   53799 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:12:46.908435   53799 kubeadm.go:309] 
	I0315 07:12:46.908607   53799 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:12:46.908746   53799 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:12:46.908764   53799 kubeadm.go:309] 
	I0315 07:12:46.908914   53799 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:12:46.909035   53799 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:12:46.909159   53799 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:12:46.909280   53799 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:12:46.909290   53799 kubeadm.go:309] 
	I0315 07:12:46.911137   53799 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:12:46.911263   53799 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:12:46.911352   53799 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:12:46.911416   53799 kubeadm.go:393] duration metric: took 3m56.421648387s to StartCluster
	I0315 07:12:46.911454   53799 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:12:46.911505   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:12:46.958822   53799 cri.go:89] found id: ""
	I0315 07:12:46.958851   53799 logs.go:276] 0 containers: []
	W0315 07:12:46.958862   53799 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:12:46.958870   53799 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:12:46.958938   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:12:46.996444   53799 cri.go:89] found id: ""
	I0315 07:12:46.996481   53799 logs.go:276] 0 containers: []
	W0315 07:12:46.996492   53799 logs.go:278] No container was found matching "etcd"
	I0315 07:12:46.996499   53799 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:12:46.996565   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:12:47.037357   53799 cri.go:89] found id: ""
	I0315 07:12:47.037385   53799 logs.go:276] 0 containers: []
	W0315 07:12:47.037396   53799 logs.go:278] No container was found matching "coredns"
	I0315 07:12:47.037404   53799 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:12:47.037465   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:12:47.075676   53799 cri.go:89] found id: ""
	I0315 07:12:47.075707   53799 logs.go:276] 0 containers: []
	W0315 07:12:47.075717   53799 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:12:47.075725   53799 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:12:47.075786   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:12:47.115678   53799 cri.go:89] found id: ""
	I0315 07:12:47.115699   53799 logs.go:276] 0 containers: []
	W0315 07:12:47.115706   53799 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:12:47.115713   53799 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:12:47.115780   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:12:47.150895   53799 cri.go:89] found id: ""
	I0315 07:12:47.150928   53799 logs.go:276] 0 containers: []
	W0315 07:12:47.150938   53799 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:12:47.150946   53799 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:12:47.151008   53799 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:12:47.185120   53799 cri.go:89] found id: ""
	I0315 07:12:47.185145   53799 logs.go:276] 0 containers: []
	W0315 07:12:47.185152   53799 logs.go:278] No container was found matching "kindnet"
	I0315 07:12:47.185161   53799 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:12:47.185174   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:12:47.285776   53799 logs.go:123] Gathering logs for container status ...
	I0315 07:12:47.285807   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:12:47.328627   53799 logs.go:123] Gathering logs for kubelet ...
	I0315 07:12:47.328656   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:12:47.379924   53799 logs.go:123] Gathering logs for dmesg ...
	I0315 07:12:47.379954   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:12:47.394828   53799 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:12:47.394855   53799 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:12:47.503159   53799 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:12:47.503222   53799 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:12:47.503265   53799 out.go:239] * 
	* 
	W0315 07:12:47.503325   53799 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:12:47.503359   53799 out.go:239] * 
	* 
	W0315 07:12:47.504198   53799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:12:47.507674   53799 out.go:177] 
	W0315 07:12:47.508847   53799 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:12:47.508913   53799 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:12:47.508938   53799 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:12:47.510514   53799 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 6 (256.319977ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:12:47.813456   56342 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-981420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (302.10s)
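
Troubleshooting note: this FirstStart failure is kubeadm timing out while waiting for the kubelet to come up on the v1.20.0 node (K8S_KUBELET_NOT_RUNNING). A minimal triage sketch, reusing only the commands and flags already suggested in the output above (the cri-o socket path and the cgroup-driver hint are taken from the log, not verified independently); the first two blocks run inside the guest (e.g. via `minikube ssh -p old-k8s-version-981420`), the retry runs from the workspace as in the test:

	# Check whether the kubelet ever started and why it keeps dying.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# List any control-plane containers cri-o managed to start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Per the suggestion in the log, retry the start with an explicit kubelet cgroup driver.
	out/minikube-linux-amd64 start -p old-k8s-version-981420 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd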

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-709708 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-709708 --alsologtostderr -v=3: exit status 82 (2m0.719919219s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-709708"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:10:34.049919   55512 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:10:34.050051   55512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:34.050061   55512 out.go:304] Setting ErrFile to fd 2...
	I0315 07:10:34.050067   55512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:34.050291   55512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:10:34.050582   55512 out.go:298] Setting JSON to false
	I0315 07:10:34.050673   55512 mustload.go:65] Loading cluster: embed-certs-709708
	I0315 07:10:34.051044   55512 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:10:34.051125   55512 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:10:34.051342   55512 mustload.go:65] Loading cluster: embed-certs-709708
	I0315 07:10:34.051456   55512 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:10:34.051480   55512 stop.go:39] StopHost: embed-certs-709708
	I0315 07:10:34.051852   55512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:10:34.051911   55512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:10:34.067017   55512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0315 07:10:34.067534   55512 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:10:34.068197   55512 main.go:141] libmachine: Using API Version  1
	I0315 07:10:34.068226   55512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:10:34.068613   55512 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:10:34.071182   55512 out.go:177] * Stopping node "embed-certs-709708"  ...
	I0315 07:10:34.072550   55512 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 07:10:34.072601   55512 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:10:34.072935   55512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 07:10:34.072967   55512 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:10:34.076301   55512 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:10:34.076749   55512 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:10:34.076785   55512 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:10:34.076962   55512 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:10:34.077165   55512 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:10:34.077363   55512 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:10:34.077525   55512 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:10:34.200935   55512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 07:10:34.263341   55512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 07:10:34.319641   55512 main.go:141] libmachine: Stopping "embed-certs-709708"...
	I0315 07:10:34.319667   55512 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:10:34.321220   55512 main.go:141] libmachine: (embed-certs-709708) Calling .Stop
	I0315 07:10:34.324685   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 0/120
	I0315 07:10:35.326996   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 1/120
	I0315 07:10:36.328443   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 2/120
	I0315 07:10:37.329890   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 3/120
	I0315 07:10:38.331613   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 4/120
	I0315 07:10:39.334149   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 5/120
	I0315 07:10:40.336201   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 6/120
	I0315 07:10:41.337809   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 7/120
	I0315 07:10:42.339273   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 8/120
	I0315 07:10:43.341099   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 9/120
	I0315 07:10:44.343432   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 10/120
	I0315 07:10:45.344737   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 11/120
	I0315 07:10:46.346236   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 12/120
	I0315 07:10:47.347840   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 13/120
	I0315 07:10:48.349256   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 14/120
	I0315 07:10:49.350987   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 15/120
	I0315 07:10:50.352844   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 16/120
	I0315 07:10:51.355049   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 17/120
	I0315 07:10:52.356980   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 18/120
	I0315 07:10:53.358654   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 19/120
	I0315 07:10:54.360624   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 20/120
	I0315 07:10:55.361964   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 21/120
	I0315 07:10:56.363241   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 22/120
	I0315 07:10:57.364934   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 23/120
	I0315 07:10:58.366311   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 24/120
	I0315 07:10:59.368289   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 25/120
	I0315 07:11:00.369866   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 26/120
	I0315 07:11:01.371463   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 27/120
	I0315 07:11:02.372990   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 28/120
	I0315 07:11:03.374841   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 29/120
	I0315 07:11:04.377421   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 30/120
	I0315 07:11:05.379009   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 31/120
	I0315 07:11:06.380698   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 32/120
	I0315 07:11:07.382109   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 33/120
	I0315 07:11:08.383613   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 34/120
	I0315 07:11:09.385759   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 35/120
	I0315 07:11:10.387297   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 36/120
	I0315 07:11:11.389301   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 37/120
	I0315 07:11:12.391668   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 38/120
	I0315 07:11:13.393258   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 39/120
	I0315 07:11:14.394884   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 40/120
	I0315 07:11:15.396406   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 41/120
	I0315 07:11:16.397959   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 42/120
	I0315 07:11:17.399478   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 43/120
	I0315 07:11:18.401181   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 44/120
	I0315 07:11:19.403103   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 45/120
	I0315 07:11:20.404768   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 46/120
	I0315 07:11:21.407031   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 47/120
	I0315 07:11:22.408580   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 48/120
	I0315 07:11:23.410061   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 49/120
	I0315 07:11:24.412018   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 50/120
	I0315 07:11:25.413430   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 51/120
	I0315 07:11:26.415096   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 52/120
	I0315 07:11:27.416815   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 53/120
	I0315 07:11:28.418137   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 54/120
	I0315 07:11:29.420335   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 55/120
	I0315 07:11:30.422032   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 56/120
	I0315 07:11:31.423491   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 57/120
	I0315 07:11:32.424928   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 58/120
	I0315 07:11:33.426455   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 59/120
	I0315 07:11:34.428799   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 60/120
	I0315 07:11:35.430159   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 61/120
	I0315 07:11:36.431613   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 62/120
	I0315 07:11:37.432877   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 63/120
	I0315 07:11:38.435204   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 64/120
	I0315 07:11:39.437334   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 65/120
	I0315 07:11:40.438623   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 66/120
	I0315 07:11:41.439840   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 67/120
	I0315 07:11:42.441470   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 68/120
	I0315 07:11:43.443334   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 69/120
	I0315 07:11:44.444861   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 70/120
	I0315 07:11:45.612260   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 71/120
	I0315 07:11:46.614431   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 72/120
	I0315 07:11:47.616686   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 73/120
	I0315 07:11:48.618547   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 74/120
	I0315 07:11:49.620447   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 75/120
	I0315 07:11:50.621827   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 76/120
	I0315 07:11:51.623302   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 77/120
	I0315 07:11:52.625353   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 78/120
	I0315 07:11:53.627400   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 79/120
	I0315 07:11:54.629226   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 80/120
	I0315 07:11:55.631410   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 81/120
	I0315 07:11:56.633616   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 82/120
	I0315 07:11:57.635449   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 83/120
	I0315 07:11:58.636775   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 84/120
	I0315 07:11:59.638590   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 85/120
	I0315 07:12:00.640104   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 86/120
	I0315 07:12:01.641783   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 87/120
	I0315 07:12:02.643365   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 88/120
	I0315 07:12:03.644951   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 89/120
	I0315 07:12:04.647204   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 90/120
	I0315 07:12:05.648409   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 91/120
	I0315 07:12:06.650169   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 92/120
	I0315 07:12:07.652200   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 93/120
	I0315 07:12:08.653628   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 94/120
	I0315 07:12:09.655780   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 95/120
	I0315 07:12:10.657540   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 96/120
	I0315 07:12:11.659338   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 97/120
	I0315 07:12:12.660989   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 98/120
	I0315 07:12:13.663438   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 99/120
	I0315 07:12:14.665493   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 100/120
	I0315 07:12:15.666904   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 101/120
	I0315 07:12:16.668812   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 102/120
	I0315 07:12:17.670172   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 103/120
	I0315 07:12:18.671859   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 104/120
	I0315 07:12:19.673485   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 105/120
	I0315 07:12:20.675607   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 106/120
	I0315 07:12:21.677150   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 107/120
	I0315 07:12:22.679465   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 108/120
	I0315 07:12:23.681722   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 109/120
	I0315 07:12:24.683738   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 110/120
	I0315 07:12:25.685338   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 111/120
	I0315 07:12:26.687191   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 112/120
	I0315 07:12:27.689339   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 113/120
	I0315 07:12:28.691266   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 114/120
	I0315 07:12:29.693486   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 115/120
	I0315 07:12:30.695769   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 116/120
	I0315 07:12:31.697271   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 117/120
	I0315 07:12:32.699408   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 118/120
	I0315 07:12:33.700852   55512 main.go:141] libmachine: (embed-certs-709708) Waiting for machine to stop 119/120
	I0315 07:12:34.701905   55512 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 07:12:34.701966   55512 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 07:12:34.703952   55512 out.go:177] 
	W0315 07:12:34.705496   55512 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 07:12:34.705516   55512 out.go:239] * 
	* 
	W0315 07:12:34.708046   55512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:12:34.709436   55512 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-709708 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708: exit status 3 (18.445114621s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:12:53.156798   56275 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0315 07:12:53.156819   56275 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-709708" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.17s)
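
Troubleshooting note: the stop failed with GUEST_STOP_TIMEOUT after 120 one-second polls, i.e. libvirt never reported the domain as shut down. A sketch for inspecting the VM state directly, assuming the kvm2 setup shown in the log (qemu:///system connection, libvirt domain named after the profile):

	# Inspect the libvirt domain state directly.
	sudo virsh -c qemu:///system list --all
	sudo virsh -c qemu:///system dominfo embed-certs-709708

	# Collect the logs the error box asks for, then retry the stop.
	out/minikube-linux-amd64 logs --file=logs.txt -p embed-certs-709708
	out/minikube-linux-amd64 stop -p embed-certs-709708 --alsologtostderr -v=3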

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-128870 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-128870 --alsologtostderr -v=3: exit status 82 (2m0.773820356s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-128870"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:10:54.023665   55653 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:10:54.023812   55653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:54.023818   55653 out.go:304] Setting ErrFile to fd 2...
	I0315 07:10:54.023823   55653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:54.024378   55653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:10:54.024817   55653 out.go:298] Setting JSON to false
	I0315 07:10:54.024913   55653 mustload.go:65] Loading cluster: default-k8s-diff-port-128870
	I0315 07:10:54.025254   55653 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:10:54.025342   55653 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:10:54.025531   55653 mustload.go:65] Loading cluster: default-k8s-diff-port-128870
	I0315 07:10:54.025659   55653 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:10:54.025710   55653 stop.go:39] StopHost: default-k8s-diff-port-128870
	I0315 07:10:54.026093   55653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:10:54.026150   55653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:10:54.041311   55653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0315 07:10:54.041741   55653 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:10:54.042332   55653 main.go:141] libmachine: Using API Version  1
	I0315 07:10:54.042354   55653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:10:54.042740   55653 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:10:54.045019   55653 out.go:177] * Stopping node "default-k8s-diff-port-128870"  ...
	I0315 07:10:54.046756   55653 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 07:10:54.046782   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:10:54.047129   55653 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 07:10:54.047152   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:10:54.050149   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:10:54.050513   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:09:21 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:10:54.050539   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:10:54.050732   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:10:54.050934   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:10:54.051095   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:10:54.051282   55653 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:10:54.171262   55653 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 07:10:54.225045   55653 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 07:10:54.299754   55653 main.go:141] libmachine: Stopping "default-k8s-diff-port-128870"...
	I0315 07:10:54.299781   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:10:54.301447   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Stop
	I0315 07:10:54.304846   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 0/120
	I0315 07:10:55.306192   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 1/120
	I0315 07:10:56.307568   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 2/120
	I0315 07:10:57.308991   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 3/120
	I0315 07:10:58.311097   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 4/120
	I0315 07:10:59.313200   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 5/120
	I0315 07:11:00.314987   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 6/120
	I0315 07:11:01.316446   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 7/120
	I0315 07:11:02.317977   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 8/120
	I0315 07:11:03.319769   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 9/120
	I0315 07:11:04.321203   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 10/120
	I0315 07:11:05.322720   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 11/120
	I0315 07:11:06.324162   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 12/120
	I0315 07:11:07.325520   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 13/120
	I0315 07:11:08.327334   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 14/120
	I0315 07:11:09.329637   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 15/120
	I0315 07:11:10.331330   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 16/120
	I0315 07:11:11.333039   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 17/120
	I0315 07:11:12.335025   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 18/120
	I0315 07:11:13.336927   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 19/120
	I0315 07:11:14.339299   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 20/120
	I0315 07:11:15.341377   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 21/120
	I0315 07:11:16.343216   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 22/120
	I0315 07:11:17.344549   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 23/120
	I0315 07:11:18.345910   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 24/120
	I0315 07:11:19.347957   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 25/120
	I0315 07:11:20.350431   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 26/120
	I0315 07:11:21.351823   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 27/120
	I0315 07:11:22.353386   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 28/120
	I0315 07:11:23.354910   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 29/120
	I0315 07:11:24.357104   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 30/120
	I0315 07:11:25.358815   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 31/120
	I0315 07:11:26.360416   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 32/120
	I0315 07:11:27.361892   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 33/120
	I0315 07:11:28.363631   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 34/120
	I0315 07:11:29.365791   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 35/120
	I0315 07:11:30.367408   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 36/120
	I0315 07:11:31.368868   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 37/120
	I0315 07:11:32.370377   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 38/120
	I0315 07:11:33.371947   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 39/120
	I0315 07:11:34.374488   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 40/120
	I0315 07:11:35.375957   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 41/120
	I0315 07:11:36.377462   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 42/120
	I0315 07:11:37.379233   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 43/120
	I0315 07:11:38.380774   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 44/120
	I0315 07:11:39.382513   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 45/120
	I0315 07:11:40.384004   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 46/120
	I0315 07:11:41.385585   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 47/120
	I0315 07:11:42.387661   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 48/120
	I0315 07:11:43.388899   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 49/120
	I0315 07:11:44.391358   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 50/120
	I0315 07:11:45.612500   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 51/120
	I0315 07:11:46.614267   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 52/120
	I0315 07:11:47.616057   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 53/120
	I0315 07:11:48.617699   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 54/120
	I0315 07:11:49.619846   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 55/120
	I0315 07:11:50.621556   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 56/120
	I0315 07:11:51.623064   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 57/120
	I0315 07:11:52.624838   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 58/120
	I0315 07:11:53.626791   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 59/120
	I0315 07:11:54.628879   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 60/120
	I0315 07:11:55.631162   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 61/120
	I0315 07:11:56.632905   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 62/120
	I0315 07:11:57.635118   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 63/120
	I0315 07:11:58.636451   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 64/120
	I0315 07:11:59.638472   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 65/120
	I0315 07:12:00.640248   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 66/120
	I0315 07:12:01.642397   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 67/120
	I0315 07:12:02.643642   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 68/120
	I0315 07:12:03.645292   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 69/120
	I0315 07:12:04.647449   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 70/120
	I0315 07:12:05.648798   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 71/120
	I0315 07:12:06.650354   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 72/120
	I0315 07:12:07.651924   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 73/120
	I0315 07:12:08.653389   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 74/120
	I0315 07:12:09.655465   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 75/120
	I0315 07:12:10.657095   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 76/120
	I0315 07:12:11.659081   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 77/120
	I0315 07:12:12.660692   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 78/120
	I0315 07:12:13.663567   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 79/120
	I0315 07:12:14.665638   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 80/120
	I0315 07:12:15.667891   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 81/120
	I0315 07:12:16.669298   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 82/120
	I0315 07:12:17.671077   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 83/120
	I0315 07:12:18.672414   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 84/120
	I0315 07:12:19.674042   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 85/120
	I0315 07:12:20.675916   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 86/120
	I0315 07:12:21.677748   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 87/120
	I0315 07:12:22.680023   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 88/120
	I0315 07:12:23.681554   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 89/120
	I0315 07:12:24.683970   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 90/120
	I0315 07:12:25.685606   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 91/120
	I0315 07:12:26.686963   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 92/120
	I0315 07:12:27.688678   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 93/120
	I0315 07:12:28.691082   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 94/120
	I0315 07:12:29.693098   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 95/120
	I0315 07:12:30.695297   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 96/120
	I0315 07:12:31.696888   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 97/120
	I0315 07:12:32.699232   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 98/120
	I0315 07:12:33.701139   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 99/120
	I0315 07:12:34.703221   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 100/120
	I0315 07:12:35.704771   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 101/120
	I0315 07:12:36.707164   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 102/120
	I0315 07:12:37.708670   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 103/120
	I0315 07:12:38.710214   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 104/120
	I0315 07:12:39.712598   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 105/120
	I0315 07:12:40.714211   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 106/120
	I0315 07:12:41.715968   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 107/120
	I0315 07:12:42.717486   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 108/120
	I0315 07:12:43.718879   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 109/120
	I0315 07:12:44.721116   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 110/120
	I0315 07:12:45.723020   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 111/120
	I0315 07:12:46.724308   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 112/120
	I0315 07:12:47.725861   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 113/120
	I0315 07:12:48.727334   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 114/120
	I0315 07:12:49.729307   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 115/120
	I0315 07:12:50.730593   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 116/120
	I0315 07:12:51.731914   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 117/120
	I0315 07:12:52.733405   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 118/120
	I0315 07:12:53.734857   55653 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for machine to stop 119/120
	I0315 07:12:54.735443   55653 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 07:12:54.735523   55653 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 07:12:54.737357   55653 out.go:177] 
	W0315 07:12:54.738465   55653 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 07:12:54.738485   55653 out.go:239] * 
	* 
	W0315 07:12:54.741428   55653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:12:54.742662   55653 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-128870 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870: exit status 3 (18.637223913s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:13:13.380935   56513 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host
	E0315 07:13:13.380961   56513 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128870" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.41s)
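
Troubleshooting note: this is the same GUEST_STOP_TIMEOUT pattern as the embed-certs stop above; the extra signal here is the post-mortem status failing with "no route to host" against 192.168.50.123:22, which suggests the guest did go away shortly after the two-minute stop window. A quick reachability sketch, assuming ping and nc are available on the Jenkins host (IP and port taken from the log):

	# Is the node IP still answering at all?
	ping -c 3 -W 2 192.168.50.123

	# Is sshd on the node still reachable?
	nc -z -w 2 192.168.50.123 22 && echo "ssh reachable" || echo "ssh unreachable"

	# Re-check the machine state before deciding whether a force stop is needed.
	out/minikube-linux-amd64 status -p default-k8s-diff-port-128870 --alsologtostderr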

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-981420 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-981420 create -f testdata/busybox.yaml: exit status 1 (42.193196ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-981420" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-981420 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 6 (229.144862ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:12:48.085976   56382 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-981420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 6 (232.913496ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:12:48.314727   56413 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-981420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
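
The create failed because the kubeconfig has no context for this profile; the status output above even names the expected fix (`minikube update-context`). The sketch below shows that recovery path; it is only the obvious first step, since update-context can only repair the entry if the profile is actually reachable.

	# Which contexts does the kubeconfig actually contain?
	kubectl config get-contexts
	# Re-point (or restore) the kubeconfig entry for the profile, as the warning suggests.
	out/minikube-linux-amd64 -p old-k8s-version-981420 update-context
	# Retry the step that failed once the context exists.
	kubectl --context old-k8s-version-981420 create -f testdata/busybox.yaml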

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-981420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-981420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.190372822s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-981420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-981420 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-981420 describe deploy/metrics-server -n kube-system: exit status 1 (44.232399ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-981420" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on the metrics-server deployment. args "kubectl --context old-k8s-version-981420 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 6 (232.952027ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:14:25.785713   57165 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-981420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.47s)
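
The addon callback failed because the apply issued inside the guest was refused at localhost:8443, i.e. the apiserver was not serving at that moment. A sketch for reproducing this by hand over SSH; the apply command is the one quoted in the stderr above, while the crictl check is an assumed way to inspect the crio runtime, not something the test itself does.

	# Is the apiserver container present and running inside the guest?
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- sudo crictl ps -a --name kube-apiserver
	# Re-run the same apply the addon callback attempted, to reproduce the refusal directly.
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-server-deployment.yaml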

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708: exit status 3 (3.167909014s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:12:56.324859   56483 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0315 07:12:56.324877   56483 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153033417s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708: exit status 3 (3.062873822s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:13:05.540844   56613 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0315 07:13:05.540865   56613 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-709708" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
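
Here the status probe itself failed at the SSH layer (no route to 192.168.39.80), so the test never got a "Stopped" answer at all. A small sketch for checking the guest and its address from the libvirt side, assuming the mk-<profile> network naming seen elsewhere in this report.

	# Domain state and the DHCP lease on the profile's private network.
	sudo virsh domstate embed-certs-709708
	sudo virsh net-dhcp-leases mk-embed-certs-709708
	# Probe the SSH port the status command could not reach.
	nc -vz -w 5 192.168.39.80 22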

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870: exit status 3 (3.168287833s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:13:16.548885   56706 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host
	E0315 07:13:16.548911   56706 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-128870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-128870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152773264s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-128870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870: exit status 3 (3.062267114s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:13:25.764915   56777 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host
	E0315 07:13:25.764939   56777 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128870" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
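
As with embed-certs above, the dashboard enable failed only because the node was unreachable over SSH, not because of the addon itself. Once the profile is running again, repeating the same command is the expected recovery; the start flags in the sketch below mirror the ones used elsewhere in this report and are not a prescribed fix.

	# Bring the profile back up, then retry the exact addon enable that failed.
	out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-128870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4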

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-184055 --alsologtostderr -v=3
E0315 07:14:21.071494   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-184055 --alsologtostderr -v=3: exit status 82 (2m0.533337209s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-184055"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:13:51.346179   57003 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:13:51.346658   57003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:13:51.346676   57003 out.go:304] Setting ErrFile to fd 2...
	I0315 07:13:51.346684   57003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:13:51.347145   57003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:13:51.347909   57003 out.go:298] Setting JSON to false
	I0315 07:13:51.348015   57003 mustload.go:65] Loading cluster: no-preload-184055
	I0315 07:13:51.348380   57003 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:13:51.348451   57003 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:13:51.348651   57003 mustload.go:65] Loading cluster: no-preload-184055
	I0315 07:13:51.348760   57003 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:13:51.348793   57003 stop.go:39] StopHost: no-preload-184055
	I0315 07:13:51.349167   57003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:13:51.349214   57003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:13:51.364418   57003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0315 07:13:51.364891   57003 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:13:51.365481   57003 main.go:141] libmachine: Using API Version  1
	I0315 07:13:51.365534   57003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:13:51.365948   57003 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:13:51.368625   57003 out.go:177] * Stopping node "no-preload-184055"  ...
	I0315 07:13:51.370136   57003 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 07:13:51.370172   57003 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:13:51.370392   57003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 07:13:51.370410   57003 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:13:51.373287   57003 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:13:51.373684   57003 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:13:51.373714   57003 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:13:51.373818   57003 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:13:51.373975   57003 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:13:51.374144   57003 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:13:51.374273   57003 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:13:51.488246   57003 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 07:13:51.544170   57003 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 07:13:51.609023   57003 main.go:141] libmachine: Stopping "no-preload-184055"...
	I0315 07:13:51.609057   57003 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:13:51.610706   57003 main.go:141] libmachine: (no-preload-184055) Calling .Stop
	I0315 07:13:51.614649   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 0/120
	I0315 07:13:52.615823   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 1/120
	I0315 07:13:53.617176   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 2/120
	I0315 07:13:54.618405   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 3/120
	I0315 07:13:55.619722   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 4/120
	I0315 07:13:56.622180   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 5/120
	I0315 07:13:57.623751   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 6/120
	I0315 07:13:58.625368   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 7/120
	I0315 07:13:59.626916   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 8/120
	I0315 07:14:00.628428   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 9/120
	I0315 07:14:01.629938   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 10/120
	I0315 07:14:02.631382   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 11/120
	I0315 07:14:03.633372   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 12/120
	I0315 07:14:04.634948   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 13/120
	I0315 07:14:05.636828   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 14/120
	I0315 07:14:06.639474   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 15/120
	I0315 07:14:07.641269   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 16/120
	I0315 07:14:08.643497   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 17/120
	I0315 07:14:09.644994   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 18/120
	I0315 07:14:10.646792   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 19/120
	I0315 07:14:11.648232   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 20/120
	I0315 07:14:12.649657   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 21/120
	I0315 07:14:13.651204   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 22/120
	I0315 07:14:14.652898   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 23/120
	I0315 07:14:15.654360   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 24/120
	I0315 07:14:16.656840   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 25/120
	I0315 07:14:17.658293   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 26/120
	I0315 07:14:18.659931   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 27/120
	I0315 07:14:19.661329   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 28/120
	I0315 07:14:20.663776   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 29/120
	I0315 07:14:21.665152   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 30/120
	I0315 07:14:22.666579   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 31/120
	I0315 07:14:23.668588   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 32/120
	I0315 07:14:24.670344   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 33/120
	I0315 07:14:25.671381   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 34/120
	I0315 07:14:26.673652   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 35/120
	I0315 07:14:27.675274   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 36/120
	I0315 07:14:28.676804   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 37/120
	I0315 07:14:29.678421   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 38/120
	I0315 07:14:30.680099   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 39/120
	I0315 07:14:31.681612   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 40/120
	I0315 07:14:32.683235   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 41/120
	I0315 07:14:33.684981   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 42/120
	I0315 07:14:34.686537   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 43/120
	I0315 07:14:35.687883   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 44/120
	I0315 07:14:36.690079   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 45/120
	I0315 07:14:37.691574   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 46/120
	I0315 07:14:38.693074   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 47/120
	I0315 07:14:39.694965   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 48/120
	I0315 07:14:40.696897   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 49/120
	I0315 07:14:41.698612   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 50/120
	I0315 07:14:42.700300   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 51/120
	I0315 07:14:43.702110   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 52/120
	I0315 07:14:44.703635   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 53/120
	I0315 07:14:45.705283   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 54/120
	I0315 07:14:46.707617   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 55/120
	I0315 07:14:47.709410   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 56/120
	I0315 07:14:48.711042   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 57/120
	I0315 07:14:49.712746   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 58/120
	I0315 07:14:50.714328   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 59/120
	I0315 07:14:51.716083   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 60/120
	I0315 07:14:52.717752   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 61/120
	I0315 07:14:53.719339   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 62/120
	I0315 07:14:54.720855   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 63/120
	I0315 07:14:55.722342   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 64/120
	I0315 07:14:56.724390   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 65/120
	I0315 07:14:57.725829   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 66/120
	I0315 07:14:58.727310   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 67/120
	I0315 07:14:59.729055   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 68/120
	I0315 07:15:00.730601   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 69/120
	I0315 07:15:01.733558   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 70/120
	I0315 07:15:02.734925   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 71/120
	I0315 07:15:03.736383   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 72/120
	I0315 07:15:04.737850   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 73/120
	I0315 07:15:05.739373   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 74/120
	I0315 07:15:06.742055   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 75/120
	I0315 07:15:07.743497   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 76/120
	I0315 07:15:08.744996   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 77/120
	I0315 07:15:09.746349   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 78/120
	I0315 07:15:10.748151   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 79/120
	I0315 07:15:11.749714   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 80/120
	I0315 07:15:12.751106   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 81/120
	I0315 07:15:13.752766   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 82/120
	I0315 07:15:14.754136   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 83/120
	I0315 07:15:15.755555   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 84/120
	I0315 07:15:16.757795   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 85/120
	I0315 07:15:17.759367   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 86/120
	I0315 07:15:18.760812   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 87/120
	I0315 07:15:19.762285   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 88/120
	I0315 07:15:20.763784   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 89/120
	I0315 07:15:21.766271   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 90/120
	I0315 07:15:22.768143   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 91/120
	I0315 07:15:23.769759   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 92/120
	I0315 07:15:24.771452   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 93/120
	I0315 07:15:25.773015   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 94/120
	I0315 07:15:26.775296   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 95/120
	I0315 07:15:27.777126   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 96/120
	I0315 07:15:28.778603   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 97/120
	I0315 07:15:29.780109   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 98/120
	I0315 07:15:30.782095   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 99/120
	I0315 07:15:31.783768   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 100/120
	I0315 07:15:32.785346   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 101/120
	I0315 07:15:33.786850   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 102/120
	I0315 07:15:34.788741   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 103/120
	I0315 07:15:35.790223   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 104/120
	I0315 07:15:36.792870   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 105/120
	I0315 07:15:37.794557   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 106/120
	I0315 07:15:38.795939   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 107/120
	I0315 07:15:39.797323   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 108/120
	I0315 07:15:40.798818   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 109/120
	I0315 07:15:41.800155   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 110/120
	I0315 07:15:42.801615   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 111/120
	I0315 07:15:43.803123   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 112/120
	I0315 07:15:44.804777   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 113/120
	I0315 07:15:45.806371   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 114/120
	I0315 07:15:46.808963   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 115/120
	I0315 07:15:47.810279   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 116/120
	I0315 07:15:48.811795   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 117/120
	I0315 07:15:49.813384   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 118/120
	I0315 07:15:50.814834   57003 main.go:141] libmachine: (no-preload-184055) Waiting for machine to stop 119/120
	I0315 07:15:51.816287   57003 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 07:15:51.816338   57003 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 07:15:51.818577   57003 out.go:177] 
	W0315 07:15:51.820357   57003 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 07:15:51.820378   57003 out.go:239] * 
	* 
	W0315 07:15:51.823032   57003 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:15:51.824558   57003 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-184055 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055: exit status 3 (18.450495851s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:16:10.276818   57504 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host
	E0315 07:16:10.276849   57504 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-184055" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.98s)
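
The stderr above shows the whole stop path: the driver first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, then asks libvirt to stop the domain and polls for up to 120 seconds, and here the guest never left the "Running" state. The sketch below drives the same (presumably ACPI) shutdown by hand with virsh, assuming the domain is named after the profile; whether forcing it off afterwards is acceptable depends on the surrounding CI cleanup.

	# Request the shutdown directly and watch the state.
	sudo virsh shutdown no-preload-184055
	sudo virsh domstate no-preload-184055
	# If the guest keeps ignoring the request (as it did for all 120 polls here), force it off.
	sudo virsh destroy no-preload-184055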

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (729.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0315 07:14:58.533040   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m6.201465117s)

                                                
                                                
-- stdout --
	* [old-k8s-version-981420] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-981420" primary control-plane node in "old-k8s-version-981420" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:14:27.489290   57277 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:14:27.489523   57277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:14:27.489532   57277 out.go:304] Setting ErrFile to fd 2...
	I0315 07:14:27.489536   57277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:14:27.489729   57277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:14:27.490300   57277 out.go:298] Setting JSON to false
	I0315 07:14:27.491184   57277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6964,"bootTime":1710479904,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:14:27.491255   57277 start.go:139] virtualization: kvm guest
	I0315 07:14:27.493620   57277 out.go:177] * [old-k8s-version-981420] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:14:27.495444   57277 notify.go:220] Checking for updates...
	I0315 07:14:27.497099   57277 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:14:27.498659   57277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:14:27.500167   57277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:14:27.501645   57277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:14:27.503262   57277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:14:27.504757   57277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:14:27.506671   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:14:27.507054   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:14:27.507113   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:14:27.522330   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0315 07:14:27.522785   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:14:27.523368   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:14:27.523389   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:14:27.523731   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:14:27.523931   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:14:27.526041   57277 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0315 07:14:27.527327   57277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:14:27.527630   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:14:27.527663   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:14:27.542177   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0315 07:14:27.542569   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:14:27.543093   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:14:27.543112   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:14:27.543466   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:14:27.543692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:14:27.581660   57277 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:14:27.583275   57277 start.go:297] selected driver: kvm2
	I0315 07:14:27.583302   57277 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:14:27.583435   57277 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:14:27.584119   57277 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:14:27.584201   57277 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:14:27.599305   57277 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:14:27.599658   57277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:14:27.599721   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:14:27.599734   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:14:27.599782   57277 start.go:340] cluster config:
	{Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:14:27.599873   57277 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:14:27.601887   57277 out.go:177] * Starting "old-k8s-version-981420" primary control-plane node in "old-k8s-version-981420" cluster
	I0315 07:14:27.603152   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:14:27.603202   57277 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 07:14:27.603213   57277 cache.go:56] Caching tarball of preloaded images
	I0315 07:14:27.603307   57277 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:14:27.603321   57277 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 07:14:27.603413   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:14:27.603592   57277 start.go:360] acquireMachinesLock for old-k8s-version-981420: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
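	(Editor's note: the repeated "will retry after ..." lines above come from minikube's wait-for-IP/wait-for-SSH loop. As a rough illustration only, and not minikube's actual retry.go code, the pattern is a poll with a growing, jittered delay; the check function, delay values, and timeout below are invented for the example.)

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // waitFor polls check() until it succeeds or the timeout elapses,
	    // sleeping a jittered, roughly doubling delay between attempts,
	    // the same shape as the "will retry after ..." messages above.
	    func waitFor(check func() error, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	delay := 250 * time.Millisecond
	    	for attempt := 1; ; attempt++ {
	    		err := check()
	    		if err == nil {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return fmt.Errorf("timed out waiting for machine to come up: %w", err)
	    		}
	    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
	    		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, jittered)
	    		time.Sleep(jittered)
	    		if delay < 4*time.Second {
	    			delay *= 2
	    		}
	    	}
	    }

	    func main() {
	    	start := time.Now()
	    	// Stand-in check: pretend the DHCP lease shows up after about 3 seconds.
	    	err := waitFor(func() error {
	    		if time.Since(start) < 3*time.Second {
	    			return errors.New("unable to find current IP address")
	    		}
	    		return nil
	    	}, 30*time.Second)
	    	fmt.Println("result:", err)
	    }

	The jitter keeps concurrent waiters from retrying in lockstep, which is why the observed delays in the log are irregular rather than exact powers of two.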
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
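	(Editor's note: provision.go:117 above reports generating a CA-signed server certificate with a SAN list covering 127.0.0.1, the VM IP, and the host names. The Go sketch below shows one way to issue such a certificate with the standard crypto/x509 package; it is illustrative only, the key sizes, lifetimes, and subject names are assumptions, and it is not minikube's provisioning code.)

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func must(err error) {
	    	if err != nil {
	    		panic(err)
	    	}
	    }

	    func main() {
	    	// Self-signed CA standing in for the ca.pem/ca-key.pem pair in the log.
	    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	    	must(err)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{CommonName: "minikubeCA"},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	must(err)
	    	caCert, err := x509.ParseCertificate(caDER)
	    	must(err)

	    	// Server certificate with the SAN list reported by provision.go:117 above.
	    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	    	must(err)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-981420"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.243")},
	    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-981420"},
	    	}
	    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	    	must(err)
	    	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	    }

	A certificate like this is what later gets copied to /etc/docker/server.pem on the guest in the scp steps that follow.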
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
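	(Editor's note: the fix.go lines above read the guest's `date +%s.%N` output, compare it against the host clock, and accept the ~85ms delta as within tolerance. Below is a minimal, hypothetical Go sketch of that check; the parsing helper, the simulated host time, and the 2-second tolerance are assumptions for illustration, not minikube's actual values.)

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // parseGuestClock converts `date +%s.%N` output such as
	    // "1710487104.166175222" into a time.Time.
	    func parseGuestClock(out string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		frac := (parts[1] + "000000000")[:9] // pad fractional part to nanoseconds
	    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
	    			return time.Time{}, err
	    		}
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, err := parseGuestClock("1710487104.166175222") // value from the log above
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Simulated host reading ~85ms later, mirroring the delta in the log.
	    	host := guest.Add(85 * time.Millisecond)

	    	delta := host.Sub(guest)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	    }

	If the delta exceeded the tolerance, a provisioner would typically resync the guest clock before proceeding, since TLS and etcd are sensitive to clock skew.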
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
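For reference, the kubelet unit dumped above is a templated systemd drop-in: ExecStart is cleared and then re-set to the pinned v1.20.0 kubelet binary with the node name, node IP and CRI-O socket filled in. Below is a minimal Go sketch of that templating step, for illustration only; it is not minikube's actual bootstrapper code, the flag list is trimmed, and the template and struct fields are assumptions.

    // Minimal sketch (not minikube's implementation) of rendering a kubelet
    // systemd drop-in like the one logged above. Flags are trimmed and the
    // field names are illustrative.
    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --container-runtime-endpoint=unix:///var/run/crio/crio.sock

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.20.0", "old-k8s-version-981420", "192.168.61.243"})
    }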
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
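The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check is to decode each document and print its apiVersion and kind; the sketch below is not a step the test performs and assumes gopkg.in/yaml.v3 as a dependency.

    // Sketch: sanity-check the multi-document kubeadm.yaml shown above by
    // decoding each document and printing its kind. Assumes gopkg.in/yaml.v3;
    // the test itself does not perform this step.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break // end of the YAML stream
            } else if err != nil {
                fmt.Fprintln(os.Stderr, "invalid document:", err)
                os.Exit(1)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }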
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
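The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP 192.168.61.243: any stale mapping is filtered out and a fresh entry is appended. A hedged Go sketch of the same filter-and-append logic follows; the helper name is an illustration, and minikube actually runs the shell version remotely via ssh_runner.

    // Sketch of the /etc/hosts rewrite performed by the bash one-liner above:
    // drop any existing "<ip>\tcontrol-plane.minikube.internal" entry, then
    // append the node IP. Illustrative only; the test does this over SSH.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func rewriteHosts(data []byte, ip, host string) []byte {
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the stale mapping, mirroring the grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return []byte(strings.Join(kept, "\n") + "\n")
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(string(rewriteHosts(data, "192.168.61.243", "control-plane.minikube.internal")))
    }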
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
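Each of the openssl x509 -checkend 86400 calls above asks whether a control-plane certificate remains valid for at least another 24 hours. A small Go equivalent using crypto/x509 is sketched below; it is an illustration of that check, not part of the test itself.

    // Sketch of the check behind "openssl x509 -checkend 86400": does the
    // certificate stay valid for at least another 24h? Illustrative only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True if the certificate's NotAfter falls inside the next d.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }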
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
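The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the pinned v1.20.0 binaries. Below is a minimal os/exec sketch of such an invocation; the wrapper is illustrative rather than minikube's bootstrapper code, and only phases that take the "all" subphase are shown.

    // Sketch of replaying "kubeadm init phase <phase> all" as logged above,
    // with PATH pointing at the pinned v1.20.0 binaries. Paths come from the
    // log; the helper itself is illustrative.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func runInitPhase(phase string) error {
        cmd := exec.Command("sudo", "env",
            "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"),
            "kubeadm", "init", "phase", phase, "all",
            "--config", "/var/tmp/minikube/kubeadm.yaml")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, phase := range []string{"certs", "kubeconfig", "control-plane"} {
            if err := runInitPhase(phase); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm init phase %s: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }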
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
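The block above is a poll loop: roughly every 500ms for about a minute, the bootstrapper probes for a kube-apiserver process with pgrep before giving up and falling back to log collection. A minimal sketch of that poll-until-deadline pattern follows; the pgrep command line is taken from the log, while the wrapper itself is an illustration.

    // Sketch of the poll-until-deadline pattern visible above: probe for the
    // kube-apiserver process every 500ms until it appears or the deadline
    // passes. The pgrep command line is taken from the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Exit status 0 means pgrep found a matching process.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(60 * time.Second); err != nil {
            fmt.Println(err)
        }
    }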
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
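The recurring "connection to the server localhost:8443 was refused" lines mean kubectl cannot reach the apiserver expected on that port, which is why "describe nodes" fails in every cycle. A standalone reachability probe one could run for diagnosis is sketched below; it is an assumption-laden illustration (self-signed certificate skipped for the probe only), not part of the test suite.

    // Illustrative sketch (not part of minikube): check whether anything answers
    // on the apiserver port that the log shows as refusing connections.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The bootstrapping apiserver serves a self-signed certificate, so
            // verification is skipped for this diagnostic probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // matches the refusals in the log
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver /healthz status:", resp.Status)
    }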
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
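The repeated "connection to the server localhost:8443 was refused" failure follows directly from the listings above: crictl finds no kube-apiserver container, so nothing is serving the apiserver port and every kubectl call against the kubeconfig is refused; the loop keeps retrying until the test's wait deadline expires. A short check that makes the cause visible, assuming the same node; these two commands are not part of the minikube log and are only a diagnostic sketch:

    # Illustrative only: confirm why kubectl is refused.
    sudo crictl ps -a --quiet --name=kube-apiserver          # empty: the apiserver container never started
    curl -sk https://localhost:8443/healthz || echo refused  # refused: nothing is listening on 8443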
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
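The "container status" step uses a small fallback chain: prefer crictl when it is on the PATH (the `which crictl || echo crictl` part keeps the bare command name even when `which` finds nothing), and fall back to the Docker CLI only if the crictl invocation itself fails. The same one-liner from the log, spelled out with comments for readability:

    # Prefer crictl from PATH; if `which` finds nothing, keep the bare name "crictl".
    # If that call errors (crictl missing or CRI socket unavailable), fall back to Docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a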
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
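Before re-running kubeadm init, the step above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here the files are simply missing, so every grep exits with status 2). A compact sketch of that cleanup, using the same endpoint and file names as the log (the loop and the grep -q form are illustrative):

    endpoint='https://control-plane.minikube.internal:8443'
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep the file only if it references the expected control-plane endpoint
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
            sudo rm -f "/etc/kubernetes/$conf"
        fi
    done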
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
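The troubleshooting steps suggested in the kubeadm output above can be run directly on the node; a minimal sketch, assuming shell access to the minikube VM (for example via `minikube ssh`), with CONTAINERID as a placeholder for an ID reported by crictl:

	# Check whether the kubelet is running and inspect its recent log entries.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List any control-plane containers CRI-O started, then inspect a failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID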
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
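The stale-config cleanup that precedes the retry below boils down to checking each kubeconfig file for the expected control-plane endpoint and deleting it when the check fails. An illustrative bash equivalent (not minikube's actual implementation; endpoint and paths taken from the log lines above):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero when the endpoint (or the file itself) is missing,
	  # in which case the stale file is removed so kubeadm can regenerate it.
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done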
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
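The component-by-component container search above can be condensed into one loop; a sketch of the equivalent crictl calls (component names taken from the log; minikube actually issues them one at a time, as shown):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # An empty result corresponds to the "No container was found matching" warnings above.
	  [ -z "$ids" ] && echo "no container found matching $name"
	done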
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
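The diagnostics gathered above come from a handful of standard commands on the node, collected here for convenience (binary and kubeconfig paths as shown in the log; the describe-nodes call fails with "connection refused" while the apiserver is down, as seen above):

	sudo journalctl -u crio -n 400        # CRI-O runtime logs
	sudo journalctl -u kubelet -n 400     # kubelet logs
	sudo crictl ps -a                     # container status
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig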
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 

                                                
                                                
** /stderr **
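The failed start above exits with K8S_KUBELET_NOT_RUNNING, and the captured output already carries its own remediation hints: inspect the kubelet journal and retry the start with the systemd cgroup driver. A minimal troubleshooting sketch along those lines, assuming shell access to the node via minikube ssh -p old-k8s-version-981420; the commands are the ones printed in the output above, and the combined retry invocation at the end is illustrative only, not part of this run:

	# inside the node: see why the kubelet never served /healthz on port 10248
	systemctl status kubelet
	journalctl -xeu kubelet

	# list any control-plane containers cri-o did start, then read the logs of a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# from the host: retry the same profile with the cgroup driver the suggestion points at
	out/minikube-linux-amd64 start -p old-k8s-version-981420 --driver=kvm2 --container-runtime=crio \
		--kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
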
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-981420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (250.616859ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25: (1.612879586s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
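
The preceding block runs `exit 0` over an external ssh client purely to confirm the guest accepts connections. A small, self-contained Go sketch of the same probe, reusing the flags, key path and address shown in the log (illustrative only, not libmachine's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Flags and key path copied from the log above; adjust for your host.
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa",
            "-p", "22",
            "docker@192.168.50.123",
            "exit 0",
        }
        if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
            fmt.Printf("SSH not ready: %v (%s)\n", err, out)
            return
        }
        fmt.Println("SSH is available")
    }
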
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
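
The shell that just completed makes sure /etc/hosts maps 127.0.1.1 to the new hostname, editing an existing entry if one is present and appending one otherwise. A hypothetical in-memory equivalent in Go (ensureHostname is not a minikube function):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostname mirrors the intent of the sed/echo script above:
    // leave the file alone if the hostname is already mapped, otherwise
    // rewrite or append the 127.0.1.1 entry.
    func ensureHostname(hosts, name string) string {
        if strings.Contains(hosts, " "+name) || strings.Contains(hosts, "\t"+name) {
            return hosts // hostname already mapped
        }
        lines := strings.Split(hosts, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name
    }

    func main() {
        fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube", "default-k8s-diff-port-128870"))
    }
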
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
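
provision.go is generating a serving certificate whose subject alternative names cover 127.0.0.1, the VM IP, the profile name, localhost and minikube. A compact Go sketch of building a certificate with those SANs; it is self-signed for brevity, whereas the real flow signs it with the minikube CA named on the same line:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs copied from the provision.go line above.
        dnsNames := []string{"default-k8s-diff-port-128870", "localhost", "minikube"}
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.123")}

        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-128870"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        // Self-signed here; minikube signs with its own CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
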
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
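
fix.go compares the guest clock with the host clock and only forces a resync when the drift exceeds a tolerance; here the ~86.8ms delta was accepted. A tiny illustrative helper for that comparison (clockWithinTolerance and the 2s tolerance are assumptions, not minikube's values):

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance reports whether guest and host differ by no more
    // than tolerance, the kind of check behind "guest clock delta is within
    // tolerance" above.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(86820592 * time.Nanosecond) // the ~86.82ms delta from the log
        fmt.Println(clockWithinTolerance(guest, host, 2*time.Second))
    }
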
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
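
The sed commands above point CRI-O at registry.k8s.io/pause:3.9 and switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup. A rough in-memory equivalent of those edits in Go, applied to the contents of 02-crio.conf held as a string (regex-based and purely illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    // configureCrio applies the same three edits as the sed commands above:
    // replace the pause image, drop any existing conmon_cgroup line, and set
    // cgroup_manager to cgroupfs with conmon_cgroup = "pod" right after it.
    func configureCrio(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = conmon.ReplaceAllString(conf, "")
        conf = cgroup.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        fmt.Println(configureCrio("pause_image = \"old\"\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\n"))
    }
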
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
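
start.go waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl. A small Go sketch of that kind of socket wait, with the path and timeout taken from the log (waitForSocket is a hypothetical helper, not the actual minikube code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the timeout
    // elapses, mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("CRI socket is ready")
    }
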
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
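
The kubeadm.yaml dumped above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A stdlib-only Go sketch that splits such a file on document separators and reports each document's kind (kindsOf is illustrative, not how minikube renders or parses the file):

    package main

    import (
        "fmt"
        "strings"
    )

    // kindsOf splits a multi-document YAML string on "---" separators and
    // returns the value of each document's top-level "kind:" field.
    func kindsOf(multiDoc string) []string {
        var kinds []string
        for _, doc := range strings.Split(multiDoc, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                trimmed := strings.TrimSpace(line)
                if strings.HasPrefix(trimmed, "kind:") {
                    kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
                    break
                }
            }
        }
        return kinds
    }

    func main() {
        sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
        fmt.Println(kindsOf(sample)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }
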
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
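Each trusted CA bundle copied above is hashed with openssl x509 -hash -noout and then linked into /etc/ssl/certs as <hash>.0, which is how OpenSSL locates CA certificates by subject hash. A minimal Go sketch of that hash-and-symlink step, shelling out to the same openssl invocation seen in the log (the paths below are placeholders for illustration, not taken from this run's state), might look like:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of the certificate at
// pemPath and symlinks it into certsDir as <hash>.0, mirroring the
// `openssl x509 -hash -noout` + `ln -fs` pair shown in the log above.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Placeholder paths for illustration only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}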
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
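The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. An equivalent check in pure Go, assuming a hypothetical certificate path, could be sketched as:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given duration (the analogue of `openssl x509 -checkend <seconds>`,
// which exits non-zero when the cert will expire inside that window).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log above checks several certs under
	// /var/lib/minikube/certs on the guest VM.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}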
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
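The wait above repeatedly polls https://192.168.50.123:8444/healthz, treating the 403 and 500 responses as "not ready yet" and stopping once the endpoint returns 200 ok. A rough sketch of such a poll loop (the URL is taken from the log; the timeout and retry interval are arbitrary choices for illustration, and TLS verification is skipped because the apiserver presents a cluster-internal certificate) might be:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}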
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
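The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist; the file's contents are not included in the log. For orientation only, a generic bridge + host-local conflist of the kind such a step produces can be generated like this (all field values here are assumptions for the sketch, not minikube's actual configuration):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI config; the keys follow the CNI conflist schema,
	// but the concrete values (name, subnet) are assumptions for this sketch.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}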
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
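provision.go above generates a server certificate signed by the minikube CA carrying the listed IP and DNS SANs (127.0.0.1, 192.168.61.243, localhost, minikube, old-k8s-version-981420). A compact, self-contained sketch of signing a cert with those kinds of SANs via crypto/x509 (a throwaway CA is created in memory here, and error handling is elided for brevity) could be:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the on-disk minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate template with IP and DNS SANs like those in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-981420"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.243")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-981420"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("signed server cert: %d bytes of DER\n", len(srvDER))
}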
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
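The block above walks the CRI-O preparation end to end: point the runtime at the registry.k8s.io/pause:3.2 pause image, switch the cgroup manager to cgroupfs, load br_netfilter, enable IPv4 forwarding, then daemon-reload and restart crio. A minimal sketch of the same steps driven from Go with os/exec (an illustrative helper, not minikube's own code; paths and values are copied from the log lines):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same edits the log shows against /etc/crio/crio.conf.d/02-crio.conf,
	// followed by the kernel-module/sysctl prep and a CRI-O restart.
	steps := [][]string{
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sudo", "modprobe", "br_netfilter"},
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
}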
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
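The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True or the 4m0s budget runs out. A rough stand-in for that loop, shelling out to kubectl with the context and pod name taken from the log (this is not the test suite's own helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Ask kubectl for the pod's Ready condition until it reports "True"
	// or the 4-minute deadline passes, mirroring the waits logged above.
	deadline := time.Now().Add(4 * time.Minute)
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-128870",
			"-n", "kube-system", "get", "pod", "metrics-server-57f55c9bc5-bhbwz",
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}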
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
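The steps above copy the 473 MB preload tarball to /preloaded.tar.lz4, unpack it into /var with lz4 decompression, and then delete it. A minimal sketch of that extraction (illustrative only; the tar flags match the logged invocation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unpack the preload tarball into /var, preserving security.capability
	// xattrs as the logged tar command does, then remove the tarball.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	_ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
}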
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
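The openssl/ln sequence above follows OpenSSL's hashed-directory convention: certificates under /etc/ssl/certs are located through symlinks named <subject-hash>.0, and "openssl x509 -hash -noout" prints that hash (b5213941, 51391683 and 3ec20f2e in this run). A condensed sketch of wiring up one such link (illustrative; the log links the copy already placed under /etc/ssl/certs rather than the source .pem directly):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Compute the subject hash of the CA and create the <hash>.0 symlink
	// that OpenSSL's certificate lookup expects.
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		fmt.Println("link failed:", err)
		return
	}
	fmt.Println("linked", pem, "->", link)
}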
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
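The repeated "will retry after ..." lines record the driver polling libvirt until the VM obtains a DHCP lease. Below is a minimal Go sketch of that wait-with-growing-delay pattern, assuming a hypothetical lookupIP helper in place of the real lease query; it is illustrative only and is not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the hypervisor's DHCP leases for the
// machine's current IP address; it is a hypothetical helper for this sketch.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a growing delay until it succeeds or the
// deadline passes, mirroring the "will retry after ..." pattern in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}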
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
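The san=[...] list above is what the generated server certificate has to cover. As a rough illustration of issuing a certificate with those SANs using Go's standard library: this is a self-signed sketch only (minikube signs its server cert against the cluster CA), and the organization and expiry values are simply borrowed from the surrounding log for the example.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a P-256 key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-184055"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-184055"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.106")},
	}
	// Self-signed for the sketch: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}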
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
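The half-second cadence of the pgrep runs above is the restart path waiting for a kube-apiserver process to appear before probing the API. A small Go sketch of that fixed-interval poll follows; it is illustrative only (not minikube's api_server.go) and simply shells out to the same sudo pgrep command.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep every 500ms until it reports a matching
// kube-apiserver process or the context expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when at least one process matches the pattern.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}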
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
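(Aside: the healthz wait above, and the 403/500 responses logged further down, are plain HTTPS probing of the apiserver until it returns 200. A minimal sketch of an equivalent readiness loop is shown here, assuming the endpoint and skip-verify behaviour visible in the log; it is illustrative, not minikube's api_server.go.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. TLS verification is skipped in this sketch because
// the probe runs before client certificates are wired up.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Non-200 (e.g. 403 for anonymous users, or 500 while post-start
			// hooks such as rbac/bootstrap-roles are still failing): retry.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}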
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
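(Aside: the preload step above copies a ~458 MB .tar.lz4 image cache onto the VM, unpacks it under /var with tar's lz4 filter, then deletes the tarball. A minimal sketch of that extraction, shelling out the same way the logged command does; paths are taken from the log and error handling is simplified.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball as in the logged command:
// tar with extended attributes preserved and lz4 decompression, extracted
// relative to /var. Requires root and an lz4 binary on the host.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println("extract failed:", err)
		os.Exit(1)
	}
	// The log removes the tarball afterwards to free space; mirrored here.
	_ = os.Remove("/preloaded.tar.lz4")
}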
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
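(Aside: the 2159-byte kubeadm.yaml staged above is the config printed in full a few entries earlier, filled in from the cluster options. A stripped-down, hypothetical sketch of rendering such a ClusterConfiguration fragment from an options struct with Go's text/template follows; the struct fields and template text are made up for illustration and are not minikube's real templates.)

package main

import (
	"os"
	"text/template"
)

// clusterOpts holds the handful of values substituted into the fragment below.
type clusterOpts struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := clusterOpts{
		AdvertiseAddress:  "192.168.39.80",
		BindPort:          8443,
		KubernetesVersion: "v1.28.4",
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}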
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
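(Aside: the shell one-liner above makes the /etc/hosts entry idempotent: drop any existing line for the name, then append "IP<TAB>name". A minimal Go sketch of the same idea; unlike the logged command it writes the file directly rather than staging in /tmp and copying with sudo.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "<TAB>name" and appends a fresh
// "ip<TAB>name" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
	return os.WriteFile(path, []byte(out), 0644) // needs root for /etc/hosts
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.80", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}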
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
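(Aside: each `openssl x509 -checkend 86400` call above asks whether the certificate is still valid 86400 seconds, i.e. 24 hours, from now. An equivalent check with Go's crypto/x509 is sketched here; the path is one of the files from the log and the helper name is illustrative.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}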
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
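(The lines above show the api_server.go helper polling https://192.168.72.106:8443/healthz roughly every 500ms until it returns 200. As a rough illustration only, not minikube's actual implementation, a standalone poller could look like the sketch below; the URL, poll interval, overall timeout, and the decision to skip TLS verification are assumptions made for the sketch.)

    // healthz_poll.go - minimal sketch of polling an apiserver /healthz
    // endpoint until it reports 200 OK, in the spirit of the api_server.go
    // checks logged above. URL, interval, timeout, and InsecureSkipVerify
    // are illustrative assumptions, not values taken from minikube.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The test VM serves a self-signed apiserver cert, so this sketch
    		// skips verification; a real client would load the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: control plane is up
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }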
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
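(The pod_ready.go lines above wait for each system-critical pod's Ready condition and skip early because the node itself still reports Ready=False. The sketch below shows the same kind of Ready-condition check done directly with client-go; it is not minikube's pod_ready.go code, it assumes a client-go version whose Get takes a context, and the kubeconfig path, namespace, and pod name are placeholders.)

    // podready_sketch.go - illustrative client-go check for a pod's Ready
    // condition, similar in spirit to the pod_ready.go waits logged above.
    // Kubeconfig path, namespace, and pod name are placeholder assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-76f75df574-tc5zh", metav1.GetOptions{})
    		if err == nil && podIsReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to be Ready")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }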
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
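(The addon enablement above is driven by scp'ing manifests into /etc/kubernetes/addons and running kubectl apply with an explicit KUBECONFIG over SSH. Below is a minimal sketch of that pattern from Go using os/exec; it is not minikube's addons code, and it assumes kubectl is on PATH and is run with sufficient privileges. The manifest and kubeconfig paths are taken from the log purely as examples.)

    // apply_addons.go - sketch of invoking kubectl with an explicit KUBECONFIG,
    // mirroring the "sudo KUBECONFIG=... kubectl apply -f ..." commands logged
    // above. Assumes kubectl is on PATH; paths are examples from the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // kubectlApply runs "kubectl apply -f <manifest> ..." against the given kubeconfig.
    func kubectlApply(kubeconfig string, manifests ...string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := kubectlApply("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml")
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }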
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
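	(Editorial note: the retries above show the restarted apiserver flipping from 500, while post-start hooks such as rbac/bootstrap-roles finish, to 200. As a rough illustration of the kind of check being performed, here is a minimal Go sketch that polls an HTTPS /healthz endpoint until it returns 200. The host and port come from the log; the timeout values and the skipped TLS verification are assumptions made for brevity, and this is not minikube's own api_server.go implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz repeatedly GETs https://<addr>/healthz until it answers 200 OK
// or the deadline expires. TLS verification is skipped purely for illustration;
// a real check would trust the cluster CA instead.
func pollHealthz(addr string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", addr, timeout)
}

func main() {
	// 192.168.39.80:8443 is the endpoint polled in the log above.
	if err := pollHealthz("192.168.39.80:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}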
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
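	(Editorial note: the 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, the sketch below shows the general shape of a bridge-plus-portmap conflist of this kind; every field value is an illustrative assumption, not the file minikube actually generated.)

package main

import "fmt"

// exampleConflist is a hypothetical bridge CNI configuration of the general
// shape written to /etc/cni/net.d/1-k8s.conflist; all values are assumptions.
const exampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	fmt.Println(exampleConflist)
}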
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
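	(Editorial note: the pod_ready helpers above wait for each system-critical pod to reach the Ready condition. A hedged, minimal client-go equivalent is sketched below; the kubeconfig path is an assumption, the label selector matches the CoreDNS wait in the log, and this is not the test harness's own helper.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selector the log waits on for CoreDNS.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
	}
}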
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
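	(Editorial note: process 57277, the old-k8s-version cluster, keeps finding no running kube-apiserver, so each cycle lists CRI containers by name and, finding none, falls back to gathering kubelet, dmesg and CRI-O logs. The commands it runs appear verbatim above; the Go sketch below only illustrates that shell-out-and-check pattern and is not minikube's logs.go/cri.go code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same crictl invocation seen in the log and
// returns any container IDs it prints, one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the "No container was found matching ..." warnings above;
			// at this point the harness falls back to journalctl/dmesg output.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}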
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
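	(Annotation, not part of the captured log: the repeating cycle above reduces to a handful of commands minikube runs on the node over SSH. The sketch below replays them manually; the binary path, kubeconfig location and crictl invocation are copied verbatim from the log lines, everything else is illustrative only.)

	  # Sketch only: manual replay of the checks logged above, run on the minikube node.
	  # No container ID is printed because no kube-apiserver container was ever created.
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # The same check is repeated for etcd, coredns, kube-scheduler, kube-proxy,
	  # kube-controller-manager, kindnet and kubernetes-dashboard, all returning nothing.

	  # With no apiserver container running, the bundled kubectl cannot reach localhost:8443:
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	  # -> "The connection to the server localhost:8443 was refused - did you specify the right host or port?"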
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
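	(Annotation, not part of the captured log: the interleaved pod_ready lines come from three parallel test profiles, process IDs 57679, 56654 and 56818, each polling its metrics-server pod until it reports Ready. The condition they wait on can be read manually with a standard kubectl query; the pod name below is taken from the log, and the jsonpath expression is just one illustrative way to print that condition.)

	  # Sketch only: prints "False" for as long as the pod_ready poller keeps logging "Ready":"False".
	  kubectl -n kube-system get pod metrics-server-57f55c9bc5-gwnxc \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'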
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
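
The loop above is minikube's diagnostic pass while it waits for the control plane: crictl finds no kube-apiserver, etcd, scheduler, controller-manager, proxy or CoreDNS containers, so the bundled kubectl is refused on localhost:8443. The same checks can be run by hand from inside the node; a minimal sketch (the crictl and pgrep calls mirror the log, the curl probe is an illustrative addition):

    # list kube-apiserver containers, including exited ones (same filter the log uses)
    sudo crictl ps -a --name kube-apiserver
    # is any apiserver process alive at all?
    sudo pgrep -af kube-apiserver
    # probe the port kubectl is refused on; expect "connection refused" while the above are empty
    curl -k https://localhost:8443/healthz
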
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
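
Interleaved with that loop, three other profiles (PIDs 57679, 56654 and 56818) are polling their metrics-server pods, none of which reach Ready. A hedged kubectl equivalent of that wait, run against the affected context (the k8s-app=metrics-server label selector is an assumption; the pod name is taken from the lines above):

    # block until the metrics-server pod reports Ready (assumed label selector)
    kubectl -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=5m
    # or read the Ready condition of one pod directly
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-gwnxc \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
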
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
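
Each pass also attempts "describe nodes" with the kubectl binary and kubeconfig that minikube placed on the node. Running the same invocation by hand (paths copied from the log) confirms that it is the API server, not the tooling, that is unreachable:

    # the exact command the log runs, plus a lighter-weight variant of the same check
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
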
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
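
With no control-plane containers to inspect, the kubelet journal is usually the most informative of the sources gathered above, since it records why the static pods are not being created. The log captures the last 400 lines per pass; following the units live is plain journalctl usage:

    # follow the kubelet unit while the retry loop runs
    sudo journalctl -u kubelet -f
    # the CRI-O side of the same story, matching the log's 400-line window
    sudo journalctl -u crio -n 400 --no-pager
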
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
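The retry loop above is minikube waiting for the control plane to come back: the loop probes for a kube-apiserver process with pgrep, every crictl query for a control-plane container comes back empty (found id: ""), and the fallback kubectl describe nodes fails because nothing answers on localhost:8443. A minimal sketch for reproducing the same probes by hand over minikube ssh (the profile name <profile> is a placeholder, not taken from this log):

    # any kube-apiserver container known to CRI-O, running or exited?
    minikube -p <profile> ssh "sudo crictl ps -a --name=kube-apiserver"
    # kubelet is what should be starting the static control-plane pods
    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 100 --no-pager"
    # CRI-O's own view of recent events
    minikube -p <profile> ssh "sudo journalctl -u crio -n 100 --no-pager"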
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
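The interleaved pod_ready.go lines come from separate minikube runs sharing this log (PIDs 56654, 56818, 57679); each line is one tick of a readiness wait for a metrics-server pod that keeps reporting Ready=False. A minimal sketch for inspecting the same condition by hand (the context name <profile> is a placeholder, and k8s-app=metrics-server is the label the addon normally carries, assumed here):

    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-bhbwz
    kubectl --context <profile> -n kube-system wait pod/metrics-server-57f55c9bc5-bhbwz --for=condition=Ready --timeout=120s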
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
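Every gathering pass above pulls the same sources: the kubelet and CRI-O units via journalctl, dmesg, crictl container status, and the kubectl describe nodes call that keeps failing. As a rough sketch, the minikube CLI can collect a similar bundle in one command (the profile name is a placeholder and the --file flag is assumed here):

    # write an aggregated log bundle (kubelet, container runtime, dmesg, etc.) to a file
    minikube -p <profile> logs --file=old-k8s-version.log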
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
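	Throughout this stretch the apiserver on localhost:8443 is down while the control plane is being rebuilt, so every "describe nodes" attempt fails with connection refused and the log collector falls back to journalctl, dmesg and container-status sources. The per-component listing it performs is essentially one crictl query per control-plane component; the following is a minimal, illustrative Go sketch of that loop (not minikube's actual cri.go/logs.go code), using the same crictl invocation shown in the log:

    // Illustrative sketch of the container-listing loop above: for each
    // control-plane component, ask crictl for matching container IDs and
    // warn when none exist. Structure and names are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) ([]string, error) {
        // Equivalent to: sudo crictl ps -a --quiet --name=<name>
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, name := range components {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }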
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
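	Before re-running kubeadm init, each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails, which is what the grep/rm pairs above show. A minimal sketch of that cleanup, using the endpoint and file list visible in the log (the helper itself is illustrative, not minikube's kubeadm.go):

    // Sketch of the stale-kubeconfig cleanup: keep a file only if it already
    // points at the expected control-plane endpoint, otherwise delete it so
    // kubeadm init can regenerate it. Endpoint/paths copied from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q not found in %s, removing stale config\n", endpoint, path)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }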
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
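	The conflist itself is rendered in memory and copied over, so only its size (457 bytes) appears in the log. Purely as an illustration of what a bridge CNI conflist of this kind looks like, the sketch below generates one in Go; the plugin names and pod subnet here are assumptions, not values taken from this run:

    // Illustrative generator for a bridge CNI conflist of the kind the
    // "Configuring bridge CNI" step installs. All values are assumed.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // assumed pod CIDR
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }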
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
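	The two kubectl commands above finish bootstrapping the rebuilt cluster: the first creates the minikube-rbac ClusterRoleBinding, granting cluster-admin to the kube-system:default service account so system pods running under it have the permissions they need, and the second stamps the node with minikube's version, commit and updated_at labels and marks it as the primary node. Both are executed on the node against the node-local kubeconfig (/var/lib/minikube/kubeconfig); the host kubeconfig is only updated later, at 07:23:20.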
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
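	The burst of repeated "kubectl get sa default" calls above is a simple poll: kubeadm init has returned, but the default service account only exists once kube-controller-manager has started its service-account controllers, and minikube waits for it before elevating kube-system privileges (here about 12.5 seconds). A hypothetical sketch of such a wait loop, with the interval and timeout chosen arbitrarily:

    // Hypothetical poll for the "default" service account; kubectl path and
    // kubeconfig are taken from the log, interval and timeout are assumed.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultServiceAccount(timeout time.Duration) error {
        kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil // service account exists; safe to proceed
            }
            time.Sleep(500 * time.Millisecond) // assumed retry interval
        }
        return fmt.Errorf("timed out after %s waiting for default service account", timeout)
    }

    func main() {
        if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }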
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
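	Addon installation here is two steps: each manifest is scp'd from memory into /etc/kubernetes/addons on the node, then a single kubectl apply covering all of the metrics-server files is run over SSH with KUBECONFIG pointing at the node-local kubeconfig. A minimal Go sketch that mirrors the logged invocation (paths copied from the log, everything else illustrative):

    // Sketch of the addon apply step: one kubectl apply over all manifests,
    // with KUBECONFIG passed through sudo exactly as in the logged command.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            fmt.Printf("apply failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }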
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
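The addon enablement logged above boils down to copying each manifest onto the node and issuing a single kubectl apply with several -f flags against the in-VM kubeconfig. A minimal Go sketch of that pattern (illustrative only; the binary and manifest paths are taken from the log, everything else, including use of the --kubeconfig flag instead of the KUBECONFIG environment variable, is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	// Build one "kubectl apply -f a.yaml -f b.yaml ..." invocation, as in the log.
    	args := []string{"--kubeconfig", "/var/lib/minikube/kubeconfig", "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	start := time.Now()
    	out, err := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...).CombinedOutput()
    	if err != nil {
    		fmt.Printf("apply failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("addons applied in %s\n%s", time.Since(start), out)
    }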
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
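For reference, the healthz wait logged by api_server.go above amounts to polling the apiserver's /healthz endpoint until it answers 200 with body "ok", then reading the control-plane version. A minimal sketch in Go, assuming the endpoint shown in the log and skipping server-certificate verification for brevity (minikube itself trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption: skip TLS verification in this sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // healthz returned 200: ok
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.50.123:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver is healthy")
    }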
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
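The repeated "Gathering logs for ..." blocks above all follow one pattern: resolve each control-plane component to container IDs with crictl ps, then tail each container's logs with crictl logs. A condensed Go sketch of that loop (the crictl flags are the ones shown in the log; error handling is trimmed and the component list is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
    	for _, name := range components {
    		// List all containers (running or exited) whose name matches the component.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("listing %s containers failed: %v\n", name, err)
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			// Tail the last 400 lines of each container's logs, as logs.go does.
    			logs, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
    		}
    	}
    }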
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
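The pod_ready.go waits that dominate this run poll each system pod's Ready condition until it reports True or the per-pod budget (4m0s here, 6m0s elsewhere) expires, which is why the metrics-server pods time out while everything else passes. A rough equivalent using kubectl and JSONPath, assuming the kube-system namespace and a pod name taken from the log; this is a sketch, not minikube's implementation, which queries the API directly:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition via kubectl until it is "True".
    func waitPodReady(namespace, pod string, timeout time.Duration) error {
    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
    			"-o", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %q in %q not Ready within %s", pod, namespace, timeout)
    }

    func main() {
    	// Example: the metrics-server pod this run gave up on after 4m0s.
    	if err := waitPodReady("kube-system", "metrics-server-57f55c9bc5-gwnxc", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }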
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
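The cleanup above checks whether each existing kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and removes it otherwise; in this run every grep exits 2 because kubeadm reset already deleted the files, so the rm calls are no-ops. A small sketch of that check (endpoint and file list taken from the log; the logic is an approximation of the kubeadm.go:162 behaviour, not its exact code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove it so kubeadm regenerates it.
    			os.Remove(f)
    			fmt.Printf("removed stale %s\n", f)
    			continue
    		}
    		fmt.Printf("keeping %s\n", f)
    	}
    }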
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
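The kubeadm output above spells out the standard post-init steps: copy /etc/kubernetes/admin.conf into ~/.kube/config as a regular user, deploy a pod network, then join further nodes with the printed token and discovery hash. Below is a minimal Go sketch of scripting just the kubeconfig step with os/exec; it assumes passwordless sudo on the node and only illustrates the printed instructions, it is not minikube's own code path (minikube runs these commands over SSH via ssh_runner).

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	// run executes a command and returns a descriptive error with its output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		cfg := filepath.Join(home, ".kube", "config")

		// The three "regular user" steps printed by kubeadm init.
		if err := os.MkdirAll(filepath.Dir(cfg), 0o755); err != nil {
			panic(err)
		}
		if err := run("sudo", "cp", "/etc/kubernetes/admin.conf", cfg); err != nil {
			panic(err)
		}
		if err := run("sudo", "chown", fmt.Sprintf("%d:%d", os.Getuid(), os.Getgid()), cfg); err != nil {
			panic(err)
		}
	}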
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
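The bridge CNI step above copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show the file's contents. The sketch below writes an illustrative bridge conflist of the same general shape; the plugin list and the 10.244.0.0/16 subnet are assumptions for the example, not the bytes minikube actually ships.

	package main

	import "os"

	// Illustrative bridge CNI config; the real 1-k8s.conflist written by minikube
	// may differ in fields and values.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		// Both steps normally require root on the node.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}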
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
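The burst of "kubectl get sa default" calls above is the harness polling, roughly twice a second, until the default ServiceAccount exists before it grants kube-system elevated RBAC. A stand-alone sketch of the same poll loop, shelling out to kubectl; the kubeconfig path is taken from the log and should be adjusted for other environments.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubeconfig := "/var/lib/minikube/kubeconfig" // path as seen in the log above
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			// Succeeds once the "default" ServiceAccount has been created.
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}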
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
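The pod_ready checks above confirm that each control-plane pod (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) reports Ready. Outside the test harness the same check can be expressed with "kubectl wait"; a small Go wrapper as a sketch, with label selectors mirroring the components listed in the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One selector per control-plane component named in the log above.
		selectors := []string{
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			out, err := exec.Command("kubectl", "wait", "--namespace=kube-system",
				"--for=condition=Ready", "pod", "-l", sel, "--timeout=6m").CombinedOutput()
			if err != nil {
				fmt.Printf("%s not ready: %v\n%s", sel, err, out)
				return
			}
			fmt.Printf("%s", out)
		}
	}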
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
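The healthz wait above issues GETs against https://192.168.39.80:8443/healthz until it returns 200 with the body "ok". A bare-bones probe in Go follows; it skips TLS verification purely to keep the sketch short, whereas minikube's real check trusts the cluster's CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// For the sketch only; a real check should verify against the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.80:8443/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}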
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
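Once the "Done!" line is printed, the kubeconfig's current context points at embed-certs-709708. A quick way to confirm that from the host, sketched with os/exec (kubectl must be on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, args := range [][]string{
			{"config", "current-context"}, // expect: embed-certs-709708
			{"get", "nodes", "-o", "wide"}, // the single control-plane node should be Ready
		} {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("kubectl %v failed: %v\n", args, err)
			}
			fmt.Printf("%s", out)
		}
	}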
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
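When kubeadm times out like this, the failure message itself lists the diagnostics to run on the node: check the kubelet unit, read its journal, and list the kube containers that cri-o actually started. A sketch that runs those suggested commands in sequence (the "grep kube | grep -v pause" filter from the message is omitted here); it has to run on the affected node with sudo available.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The commands suggested by the kubeadm failure message, run in order.
		cmds := [][]string{
			{"systemctl", "status", "kubelet"},
			{"journalctl", "-xeu", "kubelet", "--no-pager"},
			{"sudo", "crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a"},
		}
		for _, c := range cmds {
			fmt.Println("$", strings.Join(c, " "))
			out, _ := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("%s\n", out)
		}
	}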
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 
	
	
	==> CRI-O <==
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.472147658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487595472117268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1806269a-9017-416a-b0ee-246b6dd2e01a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.473055863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e814d9d6-d71e-4327-a13d-7800b93c1247 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.473104743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e814d9d6-d71e-4327-a13d-7800b93c1247 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.473138443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e814d9d6-d71e-4327-a13d-7800b93c1247 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.510386009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eee4a0f5-6db6-4064-a26e-c06224af0fb3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.510567099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eee4a0f5-6db6-4064-a26e-c06224af0fb3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.513269269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c5f32ae-877f-4fc6-a995-af63d5e93138 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.513757991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487595513736448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c5f32ae-877f-4fc6-a995-af63d5e93138 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.515622435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77a214a9-1246-417e-b8cf-3d15570a03c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.515737104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77a214a9-1246-417e-b8cf-3d15570a03c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.515832464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=77a214a9-1246-417e-b8cf-3d15570a03c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.550925720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeb8c6a0-38e1-476b-a526-5cdaa7c41446 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.551041199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeb8c6a0-38e1-476b-a526-5cdaa7c41446 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.552360315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0ae8285-c8a4-404c-a809-fd4ccaceb505 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.552819803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487595552796347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0ae8285-c8a4-404c-a809-fd4ccaceb505 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.553371763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94e03a0a-ce38-4c94-90aa-cc2536493790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.553446136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94e03a0a-ce38-4c94-90aa-cc2536493790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.553483326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=94e03a0a-ce38-4c94-90aa-cc2536493790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.590894827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=907d1259-c3e4-428e-92d0-0dbb6c3ebe21 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.591008723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=907d1259-c3e4-428e-92d0-0dbb6c3ebe21 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.592414090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8808025-08a2-49ae-9713-a513ac3f5e4e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.592904960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487595592882040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8808025-08a2-49ae-9713-a513ac3f5e4e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.593469411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abff3625-de9d-4039-a4ce-fbae83f742b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.593606841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abff3625-de9d-4039-a4ce-fbae83f742b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:26:35 old-k8s-version-981420 crio[649]: time="2024-03-15 07:26:35.593647231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=abff3625-de9d-4039-a4ce-fbae83f742b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar15 07:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054732] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711901] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.844497] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.626265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.561722] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.063802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070293] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.224970] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.142626] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.286086] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.591583] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.077354] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095694] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +9.234531] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 07:22] systemd-fstab-generator[4974]: Ignoring "noauto" option for root device
	[Mar15 07:24] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.078685] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 07:26:35 up 8 min,  0 users,  load average: 0.02, 0.10, 0.07
	Linux old-k8s-version-981420 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a60c0, 0xc000a49950)
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: goroutine 161 [select]:
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000939ef0, 0x4f0ac20, 0xc000175950, 0x1, 0xc0000a60c0)
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000bae2a0, 0xc0000a60c0)
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a4b2b0, 0xc000a75a20)
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5432]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 15 07:26:33 old-k8s-version-981420 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 15 07:26:33 old-k8s-version-981420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 15 07:26:33 old-k8s-version-981420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 15 07:26:33 old-k8s-version-981420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 15 07:26:33 old-k8s-version-981420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5498]: I0315 07:26:33.997213    5498 server.go:416] Version: v1.20.0
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5498]: I0315 07:26:33.997676    5498 server.go:837] Client rotation is on, will bootstrap in background
	Mar 15 07:26:33 old-k8s-version-981420 kubelet[5498]: I0315 07:26:33.999771    5498 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 15 07:26:34 old-k8s-version-981420 kubelet[5498]: W0315 07:26:34.000789    5498 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 15 07:26:34 old-k8s-version-981420 kubelet[5498]: I0315 07:26:34.000943    5498 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (251.837487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-981420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (729.70s)
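The kubeadm output above already names the checks to run when the kubelet never becomes healthy; the sketch below only collects them in one place, assuming the out/minikube-linux-amd64 binary and the old-k8s-version-981420 profile shown in this log. Whether the cgroup-driver suggestion printed by minikube applies to this particular run is an assumption, not a confirmed root cause.

	# Re-run the kubelet checks suggested by the kubeadm output, from the CI host.
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	out/minikube-linux-amd64 ssh -p old-k8s-version-981420 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# If the kubelet logs show a cgroup-driver mismatch, the log's own suggestion is:
	#   out/minikube-linux-amd64 start -p old-k8s-version-981420 --extra-config=kubelet.cgroup-driver=systemd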

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055: exit status 3 (3.199907553s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:16:13.476920   57578 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host
	E0315 07:16:13.476940   57578 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-184055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-184055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152416197s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-184055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
E0315 07:16:21.577828   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055: exit status 3 (3.063662928s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:16:22.692922   57638 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host
	E0315 07:16:22.692944   57638 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-184055" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
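Both status probes in this section failed with "no route to host" before and after the addon enable, so the guest was unreachable over SSH rather than cleanly stopped. A minimal check sketch follows, assuming the same binary and profile name from this log and that libvirt tooling is available on the KVM host (an assumption):

	# Confirm the guest's state before enabling addons post-stop.
	out/minikube-linux-amd64 status -p no-preload-184055                 # the test expects the host to report Stopped here
	sudo virsh list --all | grep no-preload-184055                       # KVM driver: is the libvirt domain still defined/running? (assumes virsh on the CI host)
	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-184055   # collect logs, as the error box above suggests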

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:32:26.794829498 +0000 UTC m=+5791.798538145
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-128870 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-128870 logs -n 25: (2.16258151s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
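	Reassembled from the wrapped Audit columns above (reconstructed from the table, not copied verbatim from the raw history), the 07:13 UTC restart of default-k8s-diff-port-128870 corresponds to a single invocation along these lines:

	  out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --apiserver-port=8444 --driver=kvm2 \
	    --container-runtime=crio --kubernetes-version=v1.28.4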
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
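	The repeated dial failures above show that the embed-certs-709708 VM at 192.168.39.80 never became reachable on SSH port 22 in this window. A manual reachability check from the agent could look like the following (a sketch; it assumes netcat and the libvirt CLI are available on the Jenkins host):

	  # is anything answering on the guest's SSH port?
	  nc -vz -w 5 192.168.39.80 22
	  # which addresses has libvirt actually leased to the domain?
	  sudo virsh domifaddr embed-certs-709708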
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
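	"provision: host is not running" suggests the driver saw the embed-certs-709708 domain as defined but not in the running state. Before the 5-second retry, the domain state can be checked directly (again assuming the standard libvirt CLI on the agent):

	  sudo virsh list --all                    # is embed-certs-709708 listed as running or shut off?
	  sudo virsh domstate embed-certs-709708   # exact libvirt state for the domain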
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
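For reference, the hostname and /etc/hosts steps above are plain shell commands executed over SSH against the guest VM. Below is a minimal, hypothetical Go sketch of that pattern using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, but the function itself is illustrative and is not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on the guest and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.123:22", "docker",
		"/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa",
		`sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}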
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
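The server certificate generated here is signed by the local minikube CA with SANs covering the addresses and hostnames listed in the log line above. A hedged sketch of that step with Go's crypto/x509 follows; the function name, key types, and certificate lifetime are assumptions, not minikube's certs code.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a CA-signed server certificate whose SANs match the
// ones in the log (127.0.0.1, the VM IP, the profile hostname, localhost, minikube).
func signServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-128870"}},
		DNSNames:     []string{"default-k8s-diff-port-128870", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.123")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0), // lifetime is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}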
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
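The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock before declaring the delta within tolerance. A small sketch of that comparison (the helper name and the tolerance value in the comment are assumptions):

package provision

import (
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest time minus host time.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(hostNow), nil
}

// Example with the values from the log:
//   delta, _ := clockDelta("1710487081.820376499", hostNow)
//   withinTolerance := delta.Abs() < 2*time.Second // 2s tolerance is an assumption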
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
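While the other profile provisions, the old-k8s-version-981420 VM is polled for a DHCP lease with growing, jittered delays ("will retry after ..."). A sketch of that wait loop follows; lookupIP stands in for the libvirt lease query, and the delay cap and growth factor are assumptions.

package provision

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it reports an address or the deadline passes,
// sleeping with a jittered, growing delay between attempts.
func waitForIP(lookupIP func() (string, bool), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay += delay / 2 // grow ~1.5x per attempt, roughly matching the intervals above
		}
	}
	return "", fmt.Errorf("timed out waiting for machine IP after %s", deadline)
}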
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
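The preload step copies the ~458 MB image tarball into the guest and unpacks it with the system tar, as logged above. A sketch of that invocation, run on the guest (os.Remove here is a simplification of the privileged rm in the log):

package provision

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload streams the lz4-compressed tarball through tar into dest,
// then deletes the archive, mirroring the two logged commands.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}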
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
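The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least 24 hours before it is reused. An equivalent check in Go (the function name is an assumption):

package provision

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate expires inside the
// given window, e.g. 24*time.Hour to mirror `-checkend 86400`.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}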
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
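	The healthz sequence above is the usual apiserver-restart pattern: 403 while the anonymous probe is rejected before RBAC bootstrap, 500 while poststarthooks such as rbac/bootstrap-roles are still completing, then 200 with body "ok". Below is a minimal Go sketch of that polling loop; the URL, timeout, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual api_server.go implementation.

	    // Sketch: poll an apiserver /healthz endpoint until it reports healthy.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	        // A real client would present the admin client certificate instead of
	        // skipping TLS verification; this is only for illustration.
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                // 403 (anonymous user) and 500 (poststarthooks still failing) are
	                // expected transient states; only 200 with body "ok" is healthy.
	                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.50.123:8444/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }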
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
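	Here pod_ready.go is waiting for the coredns pod's Ready condition to flip to True. The client-go sketch below shows that check pattern, reusing the namespace and pod name from the log; the kubeconfig path is a placeholder and this is an illustration, not minikube's code.

	    // Sketch: wait until a named pod reports the Ready condition as True.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // Placeholder kubeconfig path.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-zqq5q", metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to become Ready")
	    }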
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
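	provision.go reports generating a server certificate signed by the minikube CA with the listed SANs. The sketch below shows how such a SAN certificate can be issued with Go's crypto/x509; the PKCS#1 RSA CA key format, key size, validity period, and file names are assumptions for illustration only.

	    // Sketch: issue a server certificate with DNS/IP SANs from an existing CA.
	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func mustRead(path string) []byte {
	        b, err := os.ReadFile(path)
	        if err != nil {
	            panic(err)
	        }
	        return b
	    }

	    func main() {
	        // Load CA certificate and key (assumed PEM-encoded, PKCS#1 RSA key).
	        caBlock, _ := pem.Decode(mustRead("ca.pem"))
	        caCert, err := x509.ParseCertificate(caBlock.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        keyBlock, _ := pem.Decode(mustRead("ca-key.pem"))
	        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	        if err != nil {
	            panic(err)
	        }

	        // Server key and template carrying the SANs seen in the log above.
	        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-981420"}},
	            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-981420"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.243")},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	        if err != nil {
	            panic(err)
	        }
	        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
	    }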
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
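The block above records the host-side switch to CRI-O: crictl is pointed at /var/run/crio/crio.sock, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and IPv4 forwarding are enabled, and the crio service is restarted. A minimal standalone sketch of the same command sequence (run locally rather than through the test's SSH runner; paths and values copied from the log, error handling reduced to a fatal exit) might look like:

package main

import (
	"log"
	"os/exec"
)

// Replays the shell steps logged above: point crictl at CRI-O, set the pause
// image and cgroup driver, enable bridge netfilter, then restart crio.
func main() {
	steps := [][]string{
		{"/bin/bash", "-c", `sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml`},
		{"/bin/bash", "-c", `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`},
		{"/bin/bash", "-c", `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`},
		{"/bin/bash", "-c", `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`},
		{"/bin/bash", "-c", `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`},
		{"sudo", "modprobe", "br_netfilter"},
		{"/bin/bash", "-c", `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
}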
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
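The grep/echo one-liner above refreshes a single /etc/hosts entry: lines already ending in host.minikube.internal are filtered out, the new mapping is appended, and the result is copied back over /etc/hosts. A small sketch of the same filter-and-append idea in Go (operating on a local copy rather than the real /etc/hosts; the hostsUpdate helper name is illustrative, not from minikube) could be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// hostsUpdate drops any line that already maps name and appends "ip\tname",
// mirroring the { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts pattern.
func hostsUpdate(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("hosts.sample") // stand-in for /etc/hosts
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := hostsUpdate(string(in), "192.168.39.1", "host.minikube.internal")
	if err := os.WriteFile("hosts.sample", []byte(out), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}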
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
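The api_server wait that starts here repeatedly GETs https://192.168.72.106:8443/healthz until the apiserver answers 200; the 403 and 500 responses captured further down are expected intermediate states while the RBAC bootstrap roles are still being created. A bare-bones poller in the same spirit (anonymous request, certificate verification disabled purely for illustration, address taken from the log) could be:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Anonymous probe of the apiserver health endpoint; InsecureSkipVerify is
	// tolerable here only because we are checking liveness, not identity.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.106:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			fmt.Println("healthz returned", code)
			if code == http.StatusOK {
				return // apiserver is healthy
			}
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}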
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
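Because the earlier `crictl images` check found no preloaded images, the v1.28.4 CRI-O preload tarball was copied to /preloaded.tar.lz4, unpacked into /var with security xattrs preserved, and then deleted. A compact sketch of the extract-and-clean-up step (running tar locally via os/exec with the flags from the log; the tarball is assumed to already be in place) might be:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // assumed to have been copied over already

	// Same invocation as the log: preserve security xattrs, decompress with lz4,
	// unpack under /var where CRI-O keeps its image store.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extracting preload failed: %v\n%s", err, out)
	}

	// The tarball is only a transfer vehicle; remove it once extracted.
	if err := os.Remove(tarball); err != nil {
		log.Printf("could not remove %s: %v", tarball, err)
	}
}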
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
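The openssl/ln pairs above install each CA bundle the way OpenSSL expects: the certificate sits under /usr/share/ca-certificates, and a symlink named after its subject hash (3ec20f2e.0, b5213941.0, 51391683.0 in this run) is created in /etc/ssl/certs so library lookups can find it. A small sketch that derives the hash with the same openssl invocation and creates the link (target directory made configurable so it can be tried outside /etc; linkBySubjectHash is an illustrative name) could be:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash runs `openssl x509 -hash -noout -in cert` and symlinks
// <hash>.0 -> cert inside dir, mirroring the ln -fs steps in the log above.
func linkBySubjectHash(cert, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(cert, link)
}

func main() {
	// Example paths; in the test the certs live under /usr/share/ca-certificates
	// and the links are created in /etc/ssl/certs with sudo.
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		log.Fatal(err)
	}
}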
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
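Each `openssl x509 -noout -checkend 86400` call above asks whether the given control-plane certificate will still be valid 24 hours from now. The same check can be made directly against the certificate's NotAfter field; a minimal sketch (pure Go, reading a local PEM file whose name here is just a placeholder) might be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the test checks apiserver-kubelet-client.crt, the etcd
	// server/peer/healthcheck certs, and front-proxy-client.crt under
	// /var/lib/minikube/certs.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
}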
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
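	(The wait logged above is a plain poll of the apiserver's /healthz endpoint until it stops answering 500 and returns 200. Below is a minimal Go sketch of that pattern; the endpoint address, timeout, and the insecure TLS client are placeholders chosen for brevity, and the sketch illustrates the idea rather than minikube's api_server.go implementation.)

    // waitForHealthz polls a kube-apiserver /healthz endpoint until it
    // returns 200 OK or the deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Verification is skipped only to keep the sketch short; a real
            // client should trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
                // A 500 whose body lists "[-]poststarthook/..." entries means
                // some startup hooks have not finished yet; report and retry.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // Placeholder address; substitute the apiserver endpoint under test.
        if err := waitForHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }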
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
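	(The 457-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The Go sketch below writes a representative bridge + host-local configuration of the kind the bridge CNI plugin expects; the JSON content is an assumption for illustration, not the exact file minikube generated for this run.)

    // writeConflist sketches what "scp memory --> /etc/cni/net.d/1-k8s.conflist"
    // accomplishes: dropping a bridge CNI config onto the node.
    package main

    import (
        "log"
        "os"
    )

    // Representative bridge conflist; field values are illustrative only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }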
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
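	(Each pod_ready.go wait above reduces to checking the pod's PodReady condition, and in this stretch every check is skipped because the node still reports Ready=False. A short client-go sketch of the condition check follows; the kubeconfig path is a placeholder, the pod name is the coredns pod from the log, and this stands in for the idea rather than minikube's pod_ready.go code.)

    // podready polls a pod's PodReady condition until it is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder path; point it at the kubeconfig for the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-76f75df574-tc5zh", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod to be Ready")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }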
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
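	(The addon enablement above is ultimately a "kubectl apply -f" of the generated manifests against the restarted apiserver, run on the node with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. The Go sketch below shows a local equivalent of that invocation; the manifest paths and kubeconfig location are taken from the log, but running them this way outside the node is an assumption for illustration, not how minikube executes the command over SSH.)

    // applyAddons shells out to kubectl to apply the addon manifests,
    // mirroring the "kubectl apply -f ..." calls in the log above.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("kubectl", args...)
        // Point kubectl at the cluster's kubeconfig instead of the default one.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl apply failed: %v", err)
        }
    }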
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
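The cycle above repeats for every expected control-plane component: minikube asks CRI-O for a matching container with crictl, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal shell sketch of that probe-and-gather sequence, built only from the commands that appear in the log lines (the v1.20.0 kubectl path and the kubeconfig location are copied from those lines, not verified independently):

#!/bin/bash
# Probe each expected component the way the log shows minikube doing it.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "No container was found matching \"$name\""
done

# Fallback log gathering, mirroring the "Gathering logs for ..." steps above.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

With no kube-apiserver container running, the describe-nodes step is the one that fails, getting a connection refused on localhost:8443, which is why every cycle that follows ends with the same stderr block.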
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
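The interleaved pod_ready lines come from the three other clusters being exercised in parallel (process IDs 57679, 56654 and 56818); each one is polling its metrics-server pod and logging "Ready":"False" until the condition flips or the test times out. A rough manual equivalent, assuming the pod names shown in the log and a placeholder --context value (the profile names are not visible in this excerpt):

# Read the Ready condition once for one of the pods seen above.
kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-gwnxc \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Or block until the pod reports Ready (fails after the timeout, as these tests eventually do).
kubectl --context <profile> -n kube-system wait pod metrics-server-57f55c9bc5-gwnxc \
  --for=condition=Ready --timeout=5m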
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
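For reference, the empty 'found id: ""' results and the "connection to the server localhost:8443 was refused" error above indicate that CRI-O is running no control-plane containers yet, so the apiserver is simply not listening. A minimal diagnostic sketch, assuming shell access to the node (for example via "minikube ssh"); these are standard tools, not part of the test harness:

    # Confirm CRI-O has no control-plane containers and nothing listens on 8443.
    sudo crictl ps -a
    sudo ss -tlnp | grep 8443 || echo "no apiserver listening on 8443"
    # Mirrors the connection-refused error reported by kubectl above.
    curl -ksS https://localhost:8443/healthz || true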
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
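The sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here the files do not exist at all, so each grep exits with status 2 and the file is removed anyway). A rough shell equivalent, with the endpoint value taken from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        # Drop any config that does not point at the expected endpoint (or is missing).
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done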
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
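The 457-byte file copied above is the bridge CNI configuration minikube generates for the kvm2 + crio combination; the log does not reproduce its contents. Purely as an illustration, a representative bridge-plus-portmap conflist is sketched below; the plugin fields and the 10.244.0.0/16 subnet are assumptions, not the actual payload:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF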
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
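The repeated "kubectl get sa default" invocations above are a poll loop: minikube retries roughly every half second until the "default" ServiceAccount exists before granting kube-system elevated privileges via the minikube-rbac ClusterRoleBinding created earlier. A rough shell equivalent, with the 120-second timeout being an assumption:

    timeout 120 bash -c 'until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done'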
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
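Note that although the addons are reported as enabled here, the metrics-server pod is still Pending in the surrounding pod_ready lines. Assuming the kubeconfig context matches the profile name, its state can be inspected from the host with standard kubectl commands (the k8s-app=metrics-server label selector is an assumption):

    kubectl --context default-k8s-diff-port-128870 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-128870 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-128870 top nodes   # only works once metrics-server is serving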
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
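The extra wait above polls each system-critical pod for a Ready condition using the listed label selectors. A rough approximation with kubectl wait, reusing those selectors (context name assumed to match the profile):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context default-k8s-diff-port-128870 -n kube-system \
        wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
    done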
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
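The health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returned 200 "ok". A hedged equivalent from the host, with the address and port taken from the log:

    curl -k https://192.168.50.123:8444/healthz                         # expect: ok
    kubectl --context default-k8s-diff-port-128870 get --raw /healthz   # same check via client credentials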
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
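(Editorial note, not part of the captured log.) The sequence above ends with minikube waiting for the apiserver process, polling https://192.168.50.123:8444/healthz until it returns 200 "ok", and then reading the control-plane version. The snippet below is a minimal, self-contained sketch of that kind of healthz poll, written for illustration only; the endpoint URL is copied from the log, while the 2-minute deadline and the decision to skip TLS verification are assumptions made to keep the example standalone (minikube itself validates against the cluster CA).

```go
// healthz_probe.go - illustrative sketch of polling an apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above (apiserver listening on 8444); adjust as needed.
	url := "https://192.168.50.123:8444/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; a real check
		// would verify the apiserver certificate against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Poll until the endpoint answers 200 "ok" or the (assumed) deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```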
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
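(Editorial note, not part of the captured log.) The log-gathering loops above repeat one pattern per component: list container IDs with `crictl ps -a --quiet --name=<component>`, then tail each container's logs with `crictl logs --tail 400 <id>`. The sketch below reproduces that pattern for context; the component name, tail length, and use of `sudo` are illustrative assumptions, and `crictl` must be available on the host for it to run.

```go
// crictl_logs.go - illustrative sketch of the "list container by name, tail its logs" pattern.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "kube-apiserver" // component to inspect; chosen for illustration

	// Equivalent of: sudo crictl ps -a --quiet --name=kube-apiserver
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}

	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container found matching %q\n", name)
		return
	}

	// Equivalent of: sudo crictl logs --tail 400 <container-id>
	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Print(string(logs))
}
```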
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 
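The failure above is the standard kubelet-not-running timeout during `kubeadm init`: the kubelet health endpoint on 127.0.0.1:10248 never answered, so no control-plane static pods came up and minikube exited with K8S_KUBELET_NOT_RUNNING. The log already names the triage steps; the sketch below only collects them in order. It assumes the commands are run inside the affected node (for example via `minikube ssh`); the cri-o socket path and the CONTAINERID placeholder are taken verbatim from the log, while the /etc/crio paths in step 3 are an assumption about the node layout rather than something the log confirms.

    # 1. Is the kubelet unit running, and why did it last exit?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100

    # 2. Did any control-plane container start and then crash under cri-o?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the previous command

    # 3. Look for a cgroup-driver mismatch between the kubelet and cri-o
    #    (both should report the same driver, e.g. systemd); config paths are assumed
    grep -i cgroup /var/lib/kubelet/config.yaml
    sudo grep -ri cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null

If the drivers differ, the suggestion printed in the log applies: restart the profile with `minikube start --extra-config=kubelet.cgroup-driver=systemd`.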
	
	
	==> CRI-O <==
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.319446014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487948319418446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df220449-c957-4c7a-9c97-72835fb78380 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.320322620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18d46892-b7f4-49cd-94c2-dd808b998b55 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.320378706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18d46892-b7f4-49cd-94c2-dd808b998b55 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.320553725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18d46892-b7f4-49cd-94c2-dd808b998b55 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.365173489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dffc9f7e-f773-4318-9a3d-efcf35e6664e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.365278114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dffc9f7e-f773-4318-9a3d-efcf35e6664e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.366731208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1236c24-5576-4978-a456-bef8b463b011 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.367913266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487948367811018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1236c24-5576-4978-a456-bef8b463b011 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.371861027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df0f99ca-9cec-4cd4-a0d7-82e19300b582 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.371916086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df0f99ca-9cec-4cd4-a0d7-82e19300b582 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.373562252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df0f99ca-9cec-4cd4-a0d7-82e19300b582 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.420793878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13b7dac3-76d0-4df7-8aa8-236f78468aa3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.420907740Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13b7dac3-76d0-4df7-8aa8-236f78468aa3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.423444761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbe1abc5-c6cb-4f1c-b321-e7ef728b7707 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.424068146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487948424039326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbe1abc5-c6cb-4f1c-b321-e7ef728b7707 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.424831931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df9b144a-62ac-4660-9006-97be290d7025 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.424908766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df9b144a-62ac-4660-9006-97be290d7025 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.425294129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df9b144a-62ac-4660-9006-97be290d7025 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.463400886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=393d84dd-1b47-498e-8ab3-18bc29eb1f2f name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.463474762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=393d84dd-1b47-498e-8ab3-18bc29eb1f2f name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.464666639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e729d03a-098f-4463-9e66-cb122ca75f72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.465157878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487948465132402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e729d03a-098f-4463-9e66-cb122ca75f72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.465832768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=306edd80-6638-4f6a-bb91-331f6e011848 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.465907947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=306edd80-6638-4f6a-bb91-331f6e011848 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:28 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:32:28.466198185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487381599520197,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=306edd80-6638-4f6a-bb91-331f6e011848 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61f7b2f15345f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d8dfe67e86c22       storage-provisioner
	e8a70cf1fab35       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   ea4bb26b63178       kube-proxy-97bfn
	4d71da0a84bc1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   0a53fb2b72649       coredns-5dd5756b68-5gtx2
	261827033d961       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   07cf2940e19d6       coredns-5dd5756b68-4g87j
	468a9df4ca260       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   f983f8bb37d97       kube-scheduler-default-k8s-diff-port-128870
	be192990cd7f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   94a97656f4e95       etcd-default-k8s-diff-port-128870
	88b5ef91f5aff       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   a50e57b625a54       kube-apiserver-default-k8s-diff-port-128870
	e191efaaf507a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   644af00947fb2       kube-controller-manager-default-k8s-diff-port-128870
	
	
	==> coredns [261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-128870
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-128870
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=default-k8s-diff-port-128870
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:23:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-128870
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:32:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:28:33 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:28:33 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:28:33 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:28:33 +0000   Fri, 15 Mar 2024 07:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.123
	  Hostname:    default-k8s-diff-port-128870
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8ea27385ac541ca83767e82a1f9ffde
	  System UUID:                f8ea2738-5ac5-41ca-8376-7e82a1f9ffde
	  Boot ID:                    753fbe63-8d97-4300-8c5c-eafbaec56475
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4g87j                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-5dd5756b68-5gtx2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-128870                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-128870             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-128870    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-proxy-97bfn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-128870             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-59mcw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-128870 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s                  kubelet          Node default-k8s-diff-port-128870 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-128870 event: Registered Node default-k8s-diff-port-128870 in Controller
	
	
	==> dmesg <==
	[  +0.052933] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528820] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.813920] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.633903] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar15 07:18] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.059279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065142] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.257862] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.138155] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.254923] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.165236] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +0.069907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.921342] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +6.354924] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.613664] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 07:22] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.538143] systemd-fstab-generator[3374]: Ignoring "noauto" option for root device
	[Mar15 07:23] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.524420] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[ +12.930505] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.109819] kauditd_printk_skb: 14 callbacks suppressed
	[Mar15 07:24] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a] <==
	{"level":"info","ts":"2024-03-15T07:23:01.892799Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.123:2380"}
	{"level":"info","ts":"2024-03-15T07:23:01.8933Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"2472baf7c187d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-15T07:23:01.893524Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:23:01.893583Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:23:01.894845Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:23:01.909798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d switched to configuration voters=(641202906732669)"}
	{"level":"info","ts":"2024-03-15T07:23:01.910366Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9e93ed333c2c6154","local-member-id":"2472baf7c187d","added-peer-id":"2472baf7c187d","added-peer-peer-urls":["https://192.168.50.123:2380"]}
	{"level":"info","ts":"2024-03-15T07:23:01.948998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.949351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.94947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d received MsgPreVoteResp from 2472baf7c187d at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.949509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d received MsgVoteResp from 2472baf7c187d at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became leader at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2472baf7c187d elected leader 2472baf7c187d at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.951314Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2472baf7c187d","local-member-attributes":"{Name:default-k8s-diff-port-128870 ClientURLs:[https://192.168.50.123:2379]}","request-path":"/0/members/2472baf7c187d/attributes","cluster-id":"9e93ed333c2c6154","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:23:01.951722Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.951865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:23:01.960848Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:23:01.964011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:23:01.963067Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:23:01.964135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:23:01.965085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.123:2379"}
	{"level":"info","ts":"2024-03-15T07:23:01.965613Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9e93ed333c2c6154","local-member-id":"2472baf7c187d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.96964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.969763Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 07:32:28 up 14 min,  0 users,  load average: 0.22, 0.33, 0.28
	Linux default-k8s-diff-port-128870 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3] <==
	W0315 07:28:05.147365       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:28:05.147468       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:28:05.147477       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:28:05.147589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:28:05.147701       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:28:05.148668       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:29:04.024042       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:29:05.148690       1 handler_proxy.go:93] no RequestInfo found in the context
	W0315 07:29:05.148763       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:05.148979       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:29:05.149032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0315 07:29:05.148934       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:29:05.150712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:30:04.023800       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:31:04.024153       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:31:05.150088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:31:05.150166       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:31:05.150181       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:31:05.151215       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:31:05.151661       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:31:05.151768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:32:04.024173       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e] <==
	I0315 07:26:50.466177       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:27:20.013070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:27:20.476608       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:27:50.020130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:27:50.485277       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:20.026415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:20.494191       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:50.031900       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:50.504350       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:29:11.850589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="341.046µs"
	E0315 07:29:20.039041       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:20.513990       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:29:24.849485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="120.453µs"
	E0315 07:29:50.044791       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:50.524288       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:30:20.052568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:20.534110       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:30:50.058119       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:50.542567       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:20.065047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:20.553304       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:50.071439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:50.562571       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:32:20.078288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:32:20.571038       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146] <==
	I0315 07:23:23.177069       1 server_others.go:69] "Using iptables proxy"
	I0315 07:23:23.224915       1 node.go:141] Successfully retrieved node IP: 192.168.50.123
	I0315 07:23:23.319027       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 07:23:23.319075       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:23:23.322485       1 server_others.go:152] "Using iptables Proxier"
	I0315 07:23:23.323180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:23:23.323829       1 server.go:846] "Version info" version="v1.28.4"
	I0315 07:23:23.323874       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:23:23.325164       1 config.go:188] "Starting service config controller"
	I0315 07:23:23.325595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:23:23.325717       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:23:23.325813       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:23:23.328004       1 config.go:315] "Starting node config controller"
	I0315 07:23:23.328035       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:23:23.426231       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:23:23.426239       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:23:23.428794       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb] <==
	W0315 07:23:04.183042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:04.183080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.099556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:23:05.099676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 07:23:05.122085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 07:23:05.122448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 07:23:05.163142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.163191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.194582       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:23:05.194637       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:23:05.235726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:23:05.235820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 07:23:05.339008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:23:05.339035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 07:23:05.456594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.456644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.518152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.518198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.531363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:23:05.531436       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 07:23:05.547100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 07:23:05.547206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 07:23:05.552195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 07:23:05.552246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0315 07:23:07.070190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:30:07 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:30:07.963568    3702 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:30:07 default-k8s-diff-port-128870 kubelet[3702]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:30:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:30:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:30:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:30:17 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:30:17.831292    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:30:31 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:30:31.829819    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:30:46 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:30:46.831751    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:30:59 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:30:59.829625    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:31:07 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:31:07.964843    3702 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:31:07 default-k8s-diff-port-128870 kubelet[3702]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:31:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:31:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:31:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:31:11 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:31:11.831746    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:31:25 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:31:25.831385    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:31:37 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:31:37.830462    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:31:51 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:31:51.830818    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:32:06 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:32:06.831164    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:32:07 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:32:07.967365    3702 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:32:07 default-k8s-diff-port-128870 kubelet[3702]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:32:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:32:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:32:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:32:21 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:32:21.829817    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	
	
	==> storage-provisioner [61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01] <==
	I0315 07:23:23.257799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:23:23.278290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:23:23.278582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:23:23.295526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:23:23.298358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63!
	I0315 07:23:23.298576       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"093a883c-531b-45ef-aa8e-3f41d4f9810b", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63 became leader
	I0315 07:23:23.398654       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-59mcw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw: exit status 1 (63.051261ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-59mcw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0315 07:24:21.071695   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184055 -n no-preload-184055
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:32:37.961245678 +0000 UTC m=+5802.964954332
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-184055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-184055 logs -n 25: (2.102326197s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
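
The `%!s(MISSING)` tokens in the command above (and in later commands such as `date +%!s(MISSING).%!N(MISSING)` and `stat -c "%!s(MISSING) %!y(MISSING)"`) are Go's fmt placeholder for a verb with no matching argument: the literal % characters in the quoted command are interpreted by the logger, while the command that actually ran used plain %s. The runtime-options step shown here amounts to roughly the following on the guest (a sketch using the values visible in the output above):

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
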
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
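
The garbled `date +%!s(MISSING).%!N(MISSING)` above is the logger's rendering of `date +%s.%N`; minikube reads the guest's epoch time over SSH and accepts it when the drift from the host clock is within tolerance (about 87 ms in this run). A quick manual comparison, assuming the profile name used in this test:

    # guest clock, then host clock, both as seconds.nanoseconds since the epoch
    minikube -p default-k8s-diff-port-128870 ssh -- date +%s.%N
    date +%s.%N
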
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
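
For reference, the container-runtime setup logged between 07:18:02.89 and 07:18:03.14 reduces to the following steps on the guest (same files and values as in the log; a sketch, not the exact code path minikube takes):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and cgroup settings for CRI-O
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # kernel prerequisites for the bridge CNI
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # apply the new configuration
    sudo systemctl daemon-reload && sudo systemctl restart crio
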
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
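
`host.minikube.internal` is pinned to the network gateway (192.168.50.1 here) so workloads inside the guest can reach services on the host. To confirm the entry from outside the VM (profile name as used in this run):

    minikube -p default-k8s-diff-port-128870 ssh -- grep host.minikube.internal /etc/hosts
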
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
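
Because the restarted VM has no /preloaded.tar.lz4 (the stat check above failed), the cached preload tarball for Kubernetes v1.28.4 on cri-o is copied into the guest and unpacked over /var, after which `crictl images` confirms the control-plane images are present. The equivalent manual steps inside the guest, using the paths from the log:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json
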
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
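
The `"0%!"(MISSING)` values under evictionHard are the same logging artifact noted earlier; the rendered file contains `"0%"` for each threshold. This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new (2172 bytes, per the scp line below) and only replaces /var/tmp/minikube/kubeadm.yaml once the restart path decides to re-run the kubeadm phases. To inspect what actually landed on the node:

    minikube -p default-k8s-diff-port-128870 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
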
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
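
At this point the kubelet drop-in printed earlier has been written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service, systemd has been reloaded, and kubelet started. To inspect the effective unit on the node (a sketch):

    systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet
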
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
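
The certificate installation above follows OpenSSL's hashed-directory convention: each PEM copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, the value printed by `openssl x509 -hash -noout` (e.g. b5213941 for minikubeCA.pem). The pattern for a single certificate:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
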
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
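
The `-checkend 86400` checks above make openssl exit non-zero if a certificate expires within the next 86400 seconds (24 hours), so each existing control-plane certificate is verified to be good for at least another day before being reused. For a single certificate:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h"
    fi
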
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
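	(For readers following the restart flow: the grep/rm pairs above implement a simple "keep the kubeconfig only if it already points at the expected control-plane endpoint" check. A minimal standalone Go sketch of that decision, not minikube's actual kubeadm.go code; the endpoint and file paths are taken from the log above.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes conf unless it already references the expected
	// control-plane endpoint, the same decision the grep/rm pairs above encode.
	func removeIfStale(conf, endpoint string) error {
		data, err := os.ReadFile(conf)
		if os.IsNotExist(err) {
			return nil // nothing to clean up
		}
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // config already points at the right endpoint, keep it
		}
		fmt.Printf("removing stale %s\n", conf)
		return os.Remove(conf)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(conf, endpoint); err != nil {
				fmt.Println(err)
			}
		}
	}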
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
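	(The healthz sequence above, repeated 403 responses for the anonymous user and 500s while post-start hooks finish, then a final 200, is a plain retry loop against the apiserver's /healthz endpoint. A minimal illustrative sketch, not minikube's actual api_server.go implementation; the URL is the one from the log, and TLS verification is skipped here only because this standalone check has no cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires, mirroring the retry pattern in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Assumption: no cluster CA at hand in this sketch, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok" – control plane is healthy
				}
				// 403 (anonymous user) and 500 (post-start hooks pending) are expected
				// while the apiserver is still coming up; report and retry.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.123:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}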
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
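	(The pod_ready.go waits above repeatedly check whether the pod's Ready condition has flipped to "True". A rough equivalent, shown here as a sketch that shells out to kubectl rather than using minikube's internal client; the context, namespace, and pod name are the ones appearing in the log and are assumed to exist.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady shells out to kubectl and reports whether the named pod's Ready
	// condition is "True", roughly what the pod_ready.go waits above are checking.
	func podReady(context, namespace, pod string) bool {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if podReady("default-k8s-diff-port-128870", "kube-system", "coredns-5dd5756b68-zqq5q") {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}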
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
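	---
	The lines above show api_server.go probing https://192.168.72.106:8443/healthz until the 500 responses (with [-]poststarthook/rbac/bootstrap-roles still failing) give way to a 200. As an illustration only, not minikube's actual implementation, a minimal standalone poller for the same endpoint could look like the Go sketch below; the URL, interval, timeout and the InsecureSkipVerify shortcut are assumptions for the sketch (minikube authenticates with the cluster CA instead).

	// healthzwait.go: poll an apiserver /healthz endpoint until it reports healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Sketch only: skip TLS verification because the host does not trust the
		// apiserver's serving certificate here.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// A 500 listing "[-]poststarthook/... failed" means the apiserver is still starting.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	---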
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
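	---
	The two lines above create /etc/cni/net.d and copy a 457-byte 1-k8s.conflist onto the guest for the bridge CNI. The exact file contents are not shown in the log, so the Go sketch below only prints a generic bridge-plus-portmap conflist of the general shape such a file takes; the subnet and plugin options are assumptions, not minikube's configuration.

	// cniconf.go: print an example bridge CNI conflist (illustrative values only).
	package main

	import "fmt"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Printed rather than written: /etc/cni/net.d needs root, and the log shows
		// minikube copying its own file over SSH instead.
		fmt.Println(bridgeConflist)
	}
	---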
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
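	---
	The NodePressure verification above reports the node's ephemeral-storage and CPU capacity. How node_conditions.go gathers these is not shown in the log; one hedged way to read the same data with client-go is sketched below, reusing the host-side kubeconfig path the log writes (an assumption for the sketch).

	// nodecapacity.go: print node capacity and conditions (illustrative client-go sketch).
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18213-8825/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure/PIDPressure should be False, Ready should be True.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
	---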
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
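	---
	The pod_ready.go block above waits for the system-critical pods (selected by the labels listed at 07:19:12.887972) to reach the Ready condition, skipping each one while the node itself still reports Ready=False. The Go sketch below mirrors that wait with client-go purely as an illustration; it is not minikube's code, and the single shared 4-minute deadline and kubeconfig path are simplifications/assumptions.

	// podready.go: wait for labelled kube-system pods to have the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18213-8825/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // minikube waits per pod; one deadline keeps the sketch short
		for _, selector := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: selector})
				if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
					fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %s\n", selector)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
	---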
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
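	---
	The addon enablement above copies the metrics-server manifests (including metrics-apiservice.yaml) onto the guest and applies them with the bundled kubectl, then logs "Verifying addon metrics-server=true". Minikube's own verification logic is not shown here; one hedged way to confirm the metrics API is actually being served is the discovery check below (illustrative only; kubeconfig path taken from the log).

	// metricscheck.go: confirm the metrics.k8s.io/v1beta1 group is discoverable.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18213-8825/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The group/version registered by metrics-apiservice.yaml; discovery generally
		// fails until the aggregated APIService reports Available.
		list, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API not available yet:", err)
			return
		}
		for _, r := range list.APIResources {
			fmt.Println("metrics resource:", r.Name)
		}
	}
	---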
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
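(The pod_ready entries above are minikube polling each control-plane pod until its Ready condition reports True, or the per-pod timeout expires. As a rough illustration only, not minikube's actual pod_ready.go, the same check can be written with client-go along these lines; the pod and namespace names are taken from the log, the helper name is invented:)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the pod until its Ready condition is True or the
// timeout expires. A sketch of the idea behind the pod_ready.go lines above,
// not the code that produced them.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A pod logged with "Ready":"False" until the deadline corresponds to this call timing out.
	if err := waitForPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-gwnxc", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}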
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
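(The "connection to the server localhost:8443 was refused" failures above mean nothing is listening on the apiserver port at all, which is consistent with the empty crictl results for kube-apiserver. A minimal, hedged probe of that endpoint, purely illustrative and not part of the test harness, looks like this:)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Only reachability matters here, so skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// "connection refused" here matches the describe-nodes failures in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP answer (even 401/403 without credentials) means a process is listening.
	fmt.Println("apiserver answered with HTTP", resp.StatusCode)
}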
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
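(Each retry cycle above runs "sudo crictl ps -a --quiet --name=<component>" over SSH for every control-plane component and gets an empty ID list back, hence the repeated "0 containers" / "No container was found" lines. A hedged local sketch of that check, using plain os/exec instead of minikube's ssh_runner, would be:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager",
	}
	for _, name := range components {
		// Same crictl invocation as in the log, run locally for illustration.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}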
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
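	At this point process 57277 has completed one full diagnostic pass: pgrep finds no running kube-apiserver, crictl reports zero containers for every control-plane component it queries (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and the fallback "describe nodes" step fails because nothing answers on localhost:8443. A minimal shell sketch of the same checks, reusing only commands that appear verbatim in the log above (run on the minikube node; the loop itself is illustrative and not part of minikube):

	# Probe each control-plane component the same way this log does; empty output means no container was found.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# The node description that keeps failing while the apiserver is down:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig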
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
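	The recurring "connection to the server localhost:8443 was refused" stderr confirms the kube-apiserver is not serving on this node, so each cycle can only collect the kubelet, dmesg, CRI-O and container-status logs. A hedged sketch of how one might confirm that directly; the journalctl lines mirror the commands already shown in the log, while the ss probe is an added assumption and not something minikube runs here:

	# Journals minikube gathers in this log (same commands as above):
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# Assumed extra check (not in the log): is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"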
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
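
The pass above (07:21:02–07:21:03) is one full iteration of the health check that repeats throughout this log: minikube probes for a kube-apiserver process with pgrep, lists CRI containers by name with crictl, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The sketch below is not minikube's implementation (the real code paths are the ssh_runner.go/cri.go/logs.go locations shown in the log); it only reproduces the same poll-for-container pattern with os/exec, assuming crictl is available on the node it runs on.

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // pollForContainer re-runs the same listing the log shows
    // ("sudo crictl ps -a --quiet --name=<name>") until a container ID
    // appears or the context expires.
    func pollForContainer(ctx context.Context, name string, interval time.Duration) (string, error) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err == nil {
    			if id := strings.TrimSpace(string(out)); id != "" {
    				return id, nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("no %q container appeared: %w", name, ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if id, err := pollForContainer(ctx, "kube-apiserver", 3*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("found container:", id)
    	}
    }

In this run the loop never succeeds: every crictl call returns an empty list (found id: ""), which is why the same gathering cycle repeats every few seconds for the remainder of the log.
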
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
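
Every describe-nodes attempt in this log fails identically: the bundled /var/lib/minikube/binaries/v1.20.0/kubectl cannot reach localhost:8443 because nothing is listening there, which is consistent with crictl finding no kube-apiserver container at all. A minimal way to confirm that from the node itself — a sketch only, assuming the check runs where kubectl ran; 8443 is the port taken from the error text above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Mirrors the failing kubectl call: is anything accepting TCP on the
    	// apiserver port? A "connection refused" here matches the log's error.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
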
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
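
Interleaved with the failing control-plane checks, the other minikube processes in this log (56654, 56818, 57679) keep polling their metrics-server pods and reporting the Ready condition as "False". The check itself is just the pod's Ready condition; the client-go sketch below performs the equivalent lookup directly and is an illustration only, not the helper in pod_ready.go — the kubeconfig source and error handling are assumptions, while the namespace and pod name are taken from the log lines above.

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod's Ready condition is True.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Assumed kubeconfig location; the tests use per-profile kubeconfigs.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := isPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-8bslq")
    	fmt.Println(ready, err)
    }
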
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
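	[editorial note] Each cycle above repeats the same pattern for process 57277: probe for a running kube-apiserver, list every control-plane container via crictl (all listings come back empty), then fall back to journald logs and `kubectl describe nodes`, which fails because nothing answers on localhost:8443. A minimal sketch of the equivalent manual checks, run inside the minikube guest; the commands are taken from the log itself, but the SSH entry point and profile name are assumptions, not shown here:
	    # minikube ssh -p <profile>   # <profile> is a placeholder, not taken from this log
	    # Is any kube-apiserver process alive? (same pgrep the log shows)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    # Are any control-plane containers known to CRI-O? (same crictl calls the log shows)
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done
	    # Fall back to the unit logs that minikube collects when the API is unreachable.
	    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 50
	    sudo journalctl -u crio -n 400 --no-pager | tail -n 50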
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
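	[editorial note] The recurring "connection to the server localhost:8443 was refused" output is consistent with the empty crictl listings: no apiserver container exists, so nothing serves port 8443. A hedged way to confirm that directly on the guest; this assumes the `ss` (iproute2) and `curl` tools are available in the image, which this log does not show:
	    # Nothing should be listening on the apiserver port if the container is absent.
	    sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
	    # Explicit probe; "connection refused" here matches the kubectl error in the log.
	    curl -ksS --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"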
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
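	[editorial note] For process 56818 the sequence above shows the timeout path: the metrics-server readiness wait expires after 4m0s, restartPrimaryControlPlane gives up after ~4m17s, and minikube falls back to `kubeadm reset` before re-initialising. A hedged, manual equivalent of the readiness wait that timed out; it assumes the current kubectl context points at the affected cluster and that the metrics-server pods carry the k8s-app=metrics-server label (neither is shown in this log):
	    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	      --for=condition=Ready --timeout=4m0s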
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
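The block above is minikube's stale-kubeconfig cleanup: for each of the four kubeconfig files it greps for the expected control-plane endpoint and removes the file when the endpoint is absent. In this run the files do not exist at all (the earlier `ls` failed), so every grep exits with status 2 and the `rm -f` calls are no-ops. A roughly equivalent shell loop, using the endpoint from the log:

    # keep each kubeconfig only if it already points at the expected endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: delete
    done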
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
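At this point kubeadm has written the four control-plane static Pod manifests and is waiting (up to 4m0s) for the kubelet to start them. The manifests can be inspected directly on the node; the expected file names match the FileAvailable preflight checks skipped in the `kubeadm init` invocation above:

    sudo ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    sudo crictl ps --name kube-apiserver   # shows the apiserver container once the kubelet has started it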
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
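The join commands printed above embed a bootstrap token and the SHA-256 hash of the cluster CA public key. If the hash ever needs to be re-derived (for example to join another node after this output has scrolled away), the standard recipe from the kubeadm documentation can be run against the CA certificate, which this profile keeps under /var/lib/minikube/certs (see the certificateDir line above); this sketch assumes the default RSA CA key:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'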
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
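With no CNI addon requested, minikube falls back to the built-in bridge CNI and copies a small conflist (457 bytes here) into /etc/cni/net.d. The exact contents are not shown in the log; inspecting the file on the node should, as an assumption based on typical bridge CNI configs, reveal a "bridge" plugin entry with host-local IPAM, which is what gives pods their node-local subnet:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # written by the scp step above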
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
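The repeated `kubectl get sa default` calls above are minikube's wait for the "default" ServiceAccount to exist, apparently used as a readiness signal before it considers the kube-system privilege elevation (the `minikube-rbac` ClusterRoleBinding created earlier) complete. A hypothetical equivalent wait loop on the node:

    # retry until the ServiceAccount controller has created "default"
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done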
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
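With the three addons reported enabled, the result can be spot-checked from the host using the freshly written kubeconfig. The Deployment name metrics-server is inferred from the pod name in the log and is an assumption here:

    kubectl --context default-k8s-diff-port-128870 -n kube-system get deploy metrics-server   # assumed Deployment name
    kubectl --context default-k8s-diff-port-128870 -n kube-system get pod storage-provisioner
    kubectl --context default-k8s-diff-port-128870 get storageclass                           # from the default-storageclass addon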
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
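The healthz probe above is a plain HTTPS GET against the apiserver on the non-default port 8444. The same check can be made from the host; this sketch assumes anonymous access to /healthz is allowed, as in a stock kubeadm configuration (otherwise client certificates from the kubeconfig are needed):

    curl -k https://192.168.50.123:8444/healthz
    # ok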
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
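
The "minor skew" figure two lines up compares the kubectl client's minor version with the cluster's. The following is a small, self-contained sketch of that comparison, not minikube's start.go code; it assumes plain "major.minor.patch[-suffix]" strings like the ones logged.

	// Illustrative minor-skew check for versions such as "1.29.2" or "1.29.0-rc.2".
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "major.minor.patch[-suffix]" version string.
	func minor(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		client, cluster := "1.29.2", "1.28.4" // values from the log line above
		cm, err1 := minor(client)
		km, err2 := minor(cluster)
		if err1 != nil || err2 != nil {
			fmt.Println("could not parse versions:", err1, err2)
			return
		}
		skew := cm - km
		if skew < 0 {
			skew = -skew
		}
		// kubectl's documented support policy is one minor version of skew in either direction.
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}
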
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
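
The repeated "Gathering logs for ..." entries above follow one pattern: list container IDs for a given name filter with crictl, then fetch the tail of each container's log. A condensed Go sketch of that pattern is below; it shells out to the same crictl subcommands the log shows, but runs them locally rather than over minikube's ssh_runner, and omits sudo/ssh error handling details.

	// Illustrative crictl log gathering, mirroring the logged commands.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs crictl reports for containers matching name.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("listing", name, "failed:", err)
				continue
			}
			for _, id := range ids {
				// Same tail length the log uses: the last 400 lines per container.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Println("logs for", id, "failed:", err)
					continue
				}
				fmt.Printf("==> %s [%s]\n%s\n", name, id, logs)
			}
		}
	}
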
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
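
The NodePressure verification above reads each node's reported capacity (ephemeral storage and CPU in this run). One way to pull the same numbers out-of-band is to ask kubectl for the node objects and read .status.capacity; the sketch below does that with "kubectl get nodes -o json" and a trimmed-down struct. It is illustrative, not minikube's node_conditions.go, and assumes kubectl and a kubeconfig are available.

	// Illustrative read of node capacity via kubectl.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeList declares only the fields this sketch needs from "kubectl get nodes -o json".
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var nodes nodeList
		if err := json.Unmarshal(out, &nodes); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nodes.Items {
			// The log above reports the same two values: ephemeral storage and CPU.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
				n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
		}
	}
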
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
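
The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init regenerates it (here the files simply do not exist after the earlier kubeadm reset, so every check exits with status 2). A local, simplified Go analogue of that check-then-remove pass is below; the real thing runs grep and rm over SSH, as the logged commands show.

	// Simplified local analogue of the grep/rm commands in the log (no ssh, no sudo).
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				// Matches the run above: the files are absent after "kubeadm reset".
				fmt.Printf("%s: not present, nothing to clean\n", f)
				continue
			}
			if strings.Contains(string(data), endpoint) {
				fmt.Printf("%s: already points at %s, keeping\n", f, endpoint)
				continue
			}
			// Stale config for a different endpoint: drop it so kubeadm rewrites it.
			if err := os.Remove(f); err != nil {
				fmt.Printf("%s: remove failed: %v\n", f, err)
			} else {
				fmt.Printf("%s: removed stale config\n", f)
			}
		}
	}
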
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
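
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). The sketch below recomputes that value from a CA certificate file, which is how the hash can be re-derived on a machine that only has /etc/kubernetes/pki/ca.crt; it is illustrative, and kubeadm has its own implementation.

	// Recompute kubeadm's discovery-token-ca-cert-hash from a CA certificate.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm location
		if err != nil {
			fmt.Println("read ca.crt:", err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found in ca.crt")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse certificate:", err)
			return
		}
		// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			fmt.Println("marshal public key:", err)
			return
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("sha256:%x\n", sum)
	}
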
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
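
The 457-byte /etc/cni/net.d/1-k8s.conflist written above is a bridge CNI configuration. The exact content minikube generates is not reproduced in the log, so the snippet below is only a generic bridge + host-local example in the same .conflist format, written the same way the log shows (mkdir on the config directory, then the file).

	// Illustrative bridge CNI config; NOT the exact conflist minikube generates.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			fmt.Println("mkdir:", err)
			return
		}
		path := filepath.Join(dir, "1-k8s.conflist")
		if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Printf("wrote %s (%d bytes)\n", path, len(conflist))
	}
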
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
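
The long run of "kubectl get sa default" invocations above is a poll: after granting cluster-admin to kube-system:default, minikube simply retries until the "default" service account exists, which is what the 12.9s elevateKubeSystemPrivileges figure measures. A compact sketch of the same retry loop follows, shelling out to the same kubectl subcommand (local kubectl instead of minikube's ssh_runner; the kubeconfig is whatever the environment provides).

	// Illustrative retry loop for "kubectl get sa default".
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// defaultSAExists reports whether "kubectl get sa default" currently succeeds.
	func defaultSAExists() bool {
		return exec.Command("kubectl", "get", "sa", "default").Run() == nil
	}

	func main() {
		const interval = 500 * time.Millisecond // the log shows roughly 0.5s between attempts
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if defaultSAExists() {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(interval)
		}
		fmt.Println("timed out waiting for the default service account")
	}
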
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
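	[editor note] The suggestion above points at the kubelet cgroup driver. For reference, a minimal sketch of the suggested retry (not taken from this run): the profile name is a placeholder, and the driver/runtime flags simply mirror the KVM2/cri-o configuration this report was built against.

		minikube start -p <profile> \
		  --driver=kvm2 \
		  --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd

	If the kubelet then comes up, the wait-control-plane phase should stop timing out on the localhost:10248/healthz check seen above.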
	I0315 07:26:33.630909   57277 out.go:177] 
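	[editor note] The kubeadm output above also recommends checking the kubelet and listing the control-plane containers with crictl. As a convenience, the same troubleshooting sequence is collected here in one place, to be run on the affected node (e.g. via minikube ssh -p <profile>); it only restates the commands already printed in the log and assumes cri-o's default socket path.

		# Is the kubelet running, and why did it stop?
		systemctl status kubelet
		journalctl -xeu kubelet

		# Which control-plane containers did cri-o actually start?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

		# Inspect a failing container's logs (CONTAINERID is a placeholder)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID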
	
	
	==> CRI-O <==
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.462798143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487959462767777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=799ff3ac-3fd7-42e5-b1e7-dfbd0e3d14f8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.463516584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13b29199-6952-4f8e-9577-17c60d4a7a13 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.463606392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13b29199-6952-4f8e-9577-17c60d4a7a13 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.463949228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13b29199-6952-4f8e-9577-17c60d4a7a13 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.503343123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5890ec2-90c1-4441-bba2-9efdd989e57e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.503568623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5890ec2-90c1-4441-bba2-9efdd989e57e name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.504763907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ccad8a0-489d-4260-aed2-5fe581292b6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.505171758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487959505150563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ccad8a0-489d-4260-aed2-5fe581292b6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.505734238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef546b61-834c-4662-82a5-dd397a0cf726 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.505794370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef546b61-834c-4662-82a5-dd397a0cf726 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.506043703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef546b61-834c-4662-82a5-dd397a0cf726 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.549012518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b717007e-b194-4fc7-a93a-24d19cd2ab43 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.549085095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b717007e-b194-4fc7-a93a-24d19cd2ab43 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.550505512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bced78c-da35-4d05-b953-da4c083130d9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.551275665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487959551210838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bced78c-da35-4d05-b953-da4c083130d9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.552068128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc7475cd-eb66-4934-9a6b-e0bd016ddb74 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.552430446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc7475cd-eb66-4934-9a6b-e0bd016ddb74 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.552712544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc7475cd-eb66-4934-9a6b-e0bd016ddb74 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.594285240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29794c58-af8d-40a7-8705-ad355d1282c6 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.594393428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29794c58-af8d-40a7-8705-ad355d1282c6 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.596046318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11d0f05d-4f48-4bd2-8c88-96726d071718 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.596479759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710487959596456792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11d0f05d-4f48-4bd2-8c88-96726d071718 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.597045469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4715201d-e182-4945-bd1d-d9560892c971 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.597140825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4715201d-e182-4945-bd1d-d9560892c971 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:32:39 no-preload-184055 crio[692]: time="2024-03-15 07:32:39.597353144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4715201d-e182-4945-bd1d-d9560892c971 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1c3aa6c23ece       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   7341ebe223c09       storage-provisioner
	80e925fa3d211       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d833e77fa86e1       busybox
	3e3a341887d9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   406657ad057df       coredns-76f75df574-tc5zh
	4ba10dcc803b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   7341ebe223c09       storage-provisioner
	ca87ab91e305f       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   dd5f1ed1a17b1       kube-proxy-977jm
	2820074ba55a6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   45511ccba735a       kube-apiserver-no-preload-184055
	a234f9f8e0d8d       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   586f0e7088ddb       kube-controller-manager-no-preload-184055
	1c840a3842d52       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   93f87d655960b       etcd-no-preload-184055
	461e402c50f1c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   895dc958fcc28       kube-scheduler-no-preload-184055
	
	
	==> coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36696 - 37930 "HINFO IN 5426171196768362982.3097198221435832737. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010928711s
	
	
	==> describe nodes <==
	Name:               no-preload-184055
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-184055
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=no-preload-184055
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_12_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:12:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-184055
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:32:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:29:52 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:29:52 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:29:52 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:29:52 +0000   Fri, 15 Mar 2024 07:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.106
	  Hostname:    no-preload-184055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b57f50a2f704415298aec56860814624
	  System UUID:                b57f50a2-f704-4152-98ae-c56860814624
	  Boot ID:                    875c1d52-cf3e-4250-b823-726e2af71c9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-76f75df574-tc5zh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-no-preload-184055                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-no-preload-184055             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-184055    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-977jm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-184055             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-gwnxc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-184055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-184055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-184055 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node no-preload-184055 status is now: NodeReady
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-184055 event: Registered Node no-preload-184055 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-184055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-184055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-184055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-184055 event: Registered Node no-preload-184055 in Controller
	
	
	==> dmesg <==
	[Mar15 07:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062428] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.928536] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.644272] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.296383] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.059247] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069343] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.220363] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.128538] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.254777] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[Mar15 07:19] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.066798] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.237176] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +2.967027] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.022702] kauditd_printk_skb: 13 callbacks suppressed
	[  +2.092732] systemd-fstab-generator[1947]: Ignoring "noauto" option for root device
	[  +3.000365] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.326276] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] <==
	{"level":"warn","ts":"2024-03-15T07:19:11.377606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.28094ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14596604277017164686 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" mod_revision:360 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" value_size:6431 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T07:19:11.377691Z","caller":"traceutil/trace.go:171","msg":"trace[336105702] linearizableReadLoop","detail":"{readStateIndex:518; appliedIndex:517; }","duration":"1.084174044s","start":"2024-03-15T07:19:10.293497Z","end":"2024-03-15T07:19:11.377671Z","steps":["trace[336105702] 'read index received'  (duration: 673.857364ms)","trace[336105702] 'applied index is now lower than readState.Index'  (duration: 410.315886ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T07:19:11.378074Z","caller":"traceutil/trace.go:171","msg":"trace[766944469] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"1.084944098s","start":"2024-03-15T07:19:10.293112Z","end":"2024-03-15T07:19:11.378056Z","steps":["trace[766944469] 'process raft request'  (duration: 674.162853ms)","trace[766944469] 'compare'  (duration: 410.153593ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:19:11.378206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:10.293101Z","time spent":"1.085017319s","remote":"127.0.0.1:38502","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6507,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" mod_revision:360 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" value_size:6431 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-184055\" > >"}
	{"level":"warn","ts":"2024-03-15T07:19:11.378495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.085004603s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:monitoring\" ","response":"range_response_count:1 size:634"}
	{"level":"info","ts":"2024-03-15T07:19:11.378527Z","caller":"traceutil/trace.go:171","msg":"trace[1182388364] range","detail":"{range_begin:/registry/clusterroles/system:monitoring; range_end:; response_count:1; response_revision:491; }","duration":"1.085038669s","start":"2024-03-15T07:19:10.293479Z","end":"2024-03-15T07:19:11.378518Z","steps":["trace[1182388364] 'agreement among raft nodes before linearized reading'  (duration: 1.084942714s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:19:11.378553Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:10.293471Z","time spent":"1.085074794s","remote":"127.0.0.1:38680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":656,"request content":"key:\"/registry/clusterroles/system:monitoring\" "}
	{"level":"warn","ts":"2024-03-15T07:19:11.378718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"950.016616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T07:19:11.378743Z","caller":"traceutil/trace.go:171","msg":"trace[63078923] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:491; }","duration":"950.042409ms","start":"2024-03-15T07:19:10.428693Z","end":"2024-03-15T07:19:11.378736Z","steps":["trace[63078923] 'agreement among raft nodes before linearized reading'  (duration: 949.996351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:19:11.378766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:10.428665Z","time spent":"950.096574ms","remote":"127.0.0.1:38336","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-15T07:19:12.064675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.244363ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14596604277017164696 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" mod_revision:281 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" value_size:4404 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T07:19:12.064775Z","caller":"traceutil/trace.go:171","msg":"trace[579099425] linearizableReadLoop","detail":"{readStateIndex:519; appliedIndex:518; }","duration":"671.871295ms","start":"2024-03-15T07:19:11.392891Z","end":"2024-03-15T07:19:12.064762Z","steps":["trace[579099425] 'read index received'  (duration: 475.434414ms)","trace[579099425] 'applied index is now lower than readState.Index'  (duration: 196.43593ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T07:19:12.065181Z","caller":"traceutil/trace.go:171","msg":"trace[202907804] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"673.771079ms","start":"2024-03-15T07:19:11.391398Z","end":"2024-03-15T07:19:12.065169Z","steps":["trace[202907804] 'process raft request'  (duration: 476.981747ms)","trace[202907804] 'compare'  (duration: 196.126139ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:19:12.065812Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:11.391381Z","time spent":"674.246346ms","remote":"127.0.0.1:38502","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4471,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" mod_revision:281 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" value_size:4404 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-184055\" > >"}
	{"level":"warn","ts":"2024-03-15T07:19:12.06619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"673.370551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2024-03-15T07:19:12.066235Z","caller":"traceutil/trace.go:171","msg":"trace[1386136098] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:492; }","duration":"673.420176ms","start":"2024-03-15T07:19:11.392806Z","end":"2024-03-15T07:19:12.066227Z","steps":["trace[1386136098] 'agreement among raft nodes before linearized reading'  (duration: 673.327395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:19:12.06626Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:11.392798Z","time spent":"673.453023ms","remote":"127.0.0.1:38680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":862,"request content":"key:\"/registry/clusterroles/system:aggregate-to-admin\" "}
	{"level":"warn","ts":"2024-03-15T07:19:12.06631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"546.765942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T07:19:12.066359Z","caller":"traceutil/trace.go:171","msg":"trace[1580701658] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:492; }","duration":"546.821596ms","start":"2024-03-15T07:19:11.519529Z","end":"2024-03-15T07:19:12.066351Z","steps":["trace[1580701658] 'agreement among raft nodes before linearized reading'  (duration: 546.63065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:19:12.066405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:19:11.519516Z","time spent":"546.87708ms","remote":"127.0.0.1:38336","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-15T07:19:12.066466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.550224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-184055\" ","response":"range_response_count:1 size:4619"}
	{"level":"info","ts":"2024-03-15T07:19:12.066512Z","caller":"traceutil/trace.go:171","msg":"trace[231585589] range","detail":"{range_begin:/registry/minions/no-preload-184055; range_end:; response_count:1; response_revision:492; }","duration":"115.594287ms","start":"2024-03-15T07:19:11.950909Z","end":"2024-03-15T07:19:12.066504Z","steps":["trace[231585589] 'agreement among raft nodes before linearized reading'  (duration: 115.532125ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:29:07.822138Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2024-03-15T07:29:07.827568Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"4.789753ms","hash":1769709238}
	{"level":"info","ts":"2024-03-15T07:29:07.827679Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1769709238,"revision":842,"compact-revision":-1}
	
	
	==> kernel <==
	 07:32:39 up 14 min,  0 users,  load average: 0.13, 0.19, 0.13
	Linux no-preload-184055 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] <==
	I0315 07:27:10.319354       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:29:09.321517       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:09.322771       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0315 07:29:10.323352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:10.323422       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:29:10.323436       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:29:10.323531       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:10.323623       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:29:10.324612       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:30:10.324751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:30:10.324945       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:30:10.324957       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:30:10.324751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:30:10.325024       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:30:10.327025       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:32:10.326207       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:32:10.326437       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:32:10.326457       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:32:10.327436       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:32:10.327596       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:32:10.327662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] <==
	I0315 07:26:54.286733       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:27:23.800726       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:27:24.295388       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:27:53.808540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:27:54.303678       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:23.815384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:24.313420       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:53.820476       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:54.324631       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:29:23.826668       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:24.335028       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:29:53.835509       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:54.345246       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:30:23.841680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:24.353716       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:30:28.604405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="204.888µs"
	I0315 07:30:43.601564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="145.538µs"
	E0315 07:30:53.846087       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:54.367787       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:23.855806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:24.376186       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:53.861081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:54.385160       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:32:23.868741       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:32:24.393466       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] <==
	I0315 07:19:11.941362       1 server_others.go:72] "Using iptables proxy"
	I0315 07:19:12.074013       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.106"]
	I0315 07:19:12.181166       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0315 07:19:12.181243       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:19:12.181264       1 server_others.go:168] "Using iptables Proxier"
	I0315 07:19:12.188037       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:19:12.190344       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0315 07:19:12.190575       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:19:12.191535       1 config.go:188] "Starting service config controller"
	I0315 07:19:12.191609       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:19:12.191645       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:19:12.191662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:19:12.194026       1 config.go:315] "Starting node config controller"
	I0315 07:19:12.194143       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:19:12.292470       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:19:12.292734       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:19:12.294274       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] <==
	I0315 07:19:06.287433       1 serving.go:380] Generated self-signed cert in-memory
	W0315 07:19:09.256584       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:19:09.256754       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:19:09.256873       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:19:09.256906       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:19:09.364554       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0315 07:19:09.364621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:19:09.368916       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 07:19:09.369182       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 07:19:09.369229       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:19:09.369254       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 07:19:09.469398       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:30:16 no-preload-184055 kubelet[1322]: E0315 07:30:16.601186    1322 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 15 07:30:16 no-preload-184055 kubelet[1322]: E0315 07:30:16.601479    1322 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 15 07:30:16 no-preload-184055 kubelet[1322]: E0315 07:30:16.601931    1322 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9n8r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-gwnxc_kube-system(abff20ab-2240-4106-b3fc-ffce142e8069): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 15 07:30:16 no-preload-184055 kubelet[1322]: E0315 07:30:16.602058    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:30:28 no-preload-184055 kubelet[1322]: E0315 07:30:28.585647    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:30:43 no-preload-184055 kubelet[1322]: E0315 07:30:43.585724    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:30:54 no-preload-184055 kubelet[1322]: E0315 07:30:54.586469    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:31:04 no-preload-184055 kubelet[1322]: E0315 07:31:04.603998    1322 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:31:04 no-preload-184055 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:31:04 no-preload-184055 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:31:04 no-preload-184055 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:31:04 no-preload-184055 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:31:06 no-preload-184055 kubelet[1322]: E0315 07:31:06.585023    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:31:20 no-preload-184055 kubelet[1322]: E0315 07:31:20.587721    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:31:31 no-preload-184055 kubelet[1322]: E0315 07:31:31.585130    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:31:42 no-preload-184055 kubelet[1322]: E0315 07:31:42.587201    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:31:57 no-preload-184055 kubelet[1322]: E0315 07:31:57.585635    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:32:04 no-preload-184055 kubelet[1322]: E0315 07:32:04.602215    1322 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:32:04 no-preload-184055 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:32:04 no-preload-184055 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:32:04 no-preload-184055 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:32:04 no-preload-184055 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:32:09 no-preload-184055 kubelet[1322]: E0315 07:32:09.585728    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:32:24 no-preload-184055 kubelet[1322]: E0315 07:32:24.585910    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:32:38 no-preload-184055 kubelet[1322]: E0315 07:32:38.585279    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	
	
	==> storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] <==
	I0315 07:19:11.809467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0315 07:19:41.813463       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] <==
	I0315 07:19:42.928615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:19:42.945127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:19:42.945233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:20:00.346321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:20:00.346487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3!
	I0315 07:20:00.347551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb92104d-7794-46fb-a76c-f5edb625cf7c", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3 became leader
	I0315 07:20:00.447722       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184055 -n no-preload-184055
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-184055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-gwnxc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc: exit status 1 (67.24181ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-gwnxc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)
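A roughly equivalent manual triage of this failure, outside the test harness, is sketched below. The first command is the same non-running-pod listing helpers_test.go runs above; the second mirrors the UserAppExistsAfterStop wait (namespace and selector as shown for the embed-certs run that follows; the 540s timeout is illustrative, standing in for the test's ~9m deadline):

	kubectl --context no-preload-184055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context no-preload-184055 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=540s

If the selector matches no pods at all, kubectl wait typically fails immediately with "no matching resources found" rather than timing out, which helps distinguish "dashboard never deployed" from "dashboard deployed but never became ready".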

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0315 07:24:58.532762   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709708 -n embed-certs-709708
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:33:26.094253552 +0000 UTC m=+5851.097962206
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
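The wait performed here can be approximated from the command line; the following is a sketch of an equivalent manual check (not the test's actual Go code), reusing the same label, namespace, and 9m budget:

	# list the dashboard pods the test is waiting for
	kubectl --context embed-certs-709708 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until they are Ready, mirroring the test's 9m timeout
	kubectl --context embed-certs-709708 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m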
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-709708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-709708 logs -n 25: (2.192549903s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
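The script above is the provisioner's /etc/hosts fix-up: it pins the machine's hostname to 127.0.1.1 so local name resolution works before any cluster DNS exists. As a minimal illustrative sketch (not minikube's actual provisioner code; the helper name hostsCommand is made up here), the same snippet can be generated from the hostname alone:

package main

import "fmt"

// hostsCommand builds the same shell snippet that the provisioner runs over
// SSH above, parameterized by the machine hostname. Purely illustrative.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsCommand("default-k8s-diff-port-128870"))
}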
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
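For context, the guest clock check above parses the timestamp the VM returns over SSH and compares it with the host-side reference time; only if the delta exceeded a tolerance would minikube intervene. A minimal sketch of that comparison, using the two timestamps from this log (the 1s tolerance is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1710487081, 820376499)                     // date +%s.%N inside the VM
	host := time.Date(2024, 3, 15, 7, 18, 1, 733555907, time.UTC) // host-side reference time

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 1 * time.Second // assumed tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}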
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
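The repeated "waiting for machine to come up" lines are a retry loop: libmachine polls libvirt for the domain's DHCP lease and sleeps a randomized, growing interval between attempts. A rough self-contained sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff growth factor is assumed rather than taken from minikube:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it fails for the
// first few attempts, the way the domain above has no IP address yet.
func lookupIP(attempt int) (string, error) {
	if attempt < 6 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.x.x", nil // placeholder, not the real lease
}

func main() {
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Randomize and grow the wait, roughly matching the intervals in the log.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("retry %d: %v: will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		base = base * 3 / 2
	}
}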
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
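The kubeadm.yaml.new copied here is the multi-document stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). A small stdlib-only sketch of how such a stream can be split and sanity-checked for the expected document kinds (illustrative only, not minikube's code; the constant is trimmed to apiVersion/kind):

package main

import (
	"fmt"
	"strings"
)

// Trimmed to apiVersion/kind only; the full content appears in the log above.
const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	// Split the stream on document separators and report each document's kind.
	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind := strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				fmt.Printf("document %d: %s\n", i+1, kind)
			}
		}
	}
}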
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
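Each "openssl x509 -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger certificate regeneration. A minimal Go equivalent of that check (illustrative only; pass a certificate path as the first argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 86400 seconds.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate is valid past", deadline.Format(time.RFC3339))
}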
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
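
The loop above polls the apiserver's /healthz endpoint until it returns 200 with body "ok". The early 403s appear because the RBAC bootstrap roles (including the default rule that lets unauthenticated callers read /healthz) have not been created yet, and the 500s list which post-start hooks are still pending. A minimal sketch of such a poller, assuming anonymous HTTPS access and skipped TLS verification (this is not minikube's api_server.go; endpoint and timings are taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: accept the apiserver's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.50.123:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				fmt.Println("apiserver reports healthy")
				return
			}
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
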
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
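
The pod_ready lines above poll each system-critical pod until its Ready condition becomes True. A rough client-go equivalent (not minikube's pod_ready.go), assuming a kubeconfig at the default location and using the coredns pod name from this log purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-zqq5q", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
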
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
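
provision.go:117 above issues a server certificate signed by the machine CA, with the listed IPs and hostnames as subject alternative names. A minimal sketch of producing such a SAN certificate with Go's crypto/x509 (file names, key size, and validity period here are assumptions, the CA key is assumed to be PKCS#1 RSA, and this is not necessarily how libmachine does it):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Assumption: CA cert/key are PEM files in the working directory.
	caCertPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes "RSA PRIVATE KEY"
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-981420"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-981420"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.243")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
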
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
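
The guest-clock check above runs `date +%s.%N` on the VM, parses the output, and compares it against the host clock; the ~85ms delta is within tolerance, so no clock resync is forced. A small sketch of just the parse-and-compare step (the SSH transport is omitted; the tolerance value is an assumption, and float parsing loses a few hundred nanoseconds, which is fine for a tolerance check):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the host clock (localNow) is ahead of (positive) or behind (negative)
// the guest clock.
func clockDelta(localNow time.Time, guestOutput string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return localNow.Sub(guest), nil
}

func main() {
	// Values taken from the log lines above.
	local := time.Date(2024, 3, 15, 7, 18, 24, 81115155, time.UTC)
	delta, err := clockDelta(local, "1710487104.166175222")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's exact value
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
}
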
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
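	Taken together, the three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup. On an otherwise stock drop-in, /etc/crio/crio.conf.d/02-crio.conf would end up roughly as follows (a sketch; the section headers are assumed, not captured from this run):
	    # illustrative end state of /etc/crio/crio.conf.d/02-crio.conf
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"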
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
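	The failed sysctl probe, the modprobe, and the ip_forward write above are the usual bridge-netfilter preparation. The end state can be confirmed with checks along these lines (illustrative commands, not part of the test run):
	    # verify the netfilter/ip_forward setup performed above
	    lsmod | grep br_netfilter                       # module loaded by 'sudo modprobe br_netfilter'
	    sudo sysctl net.bridge.bridge-nf-call-iptables  # resolvable once br_netfilter is loaded
	    cat /proc/sys/net/ipv4/ip_forward               # expected to print 1 after the echo above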
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
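	The preload transferred and unpacked above is an lz4-compressed tarball extracted into /var with xattrs preserved. Outside the test, the same kind of archive can be inspected without extracting it (illustrative; the file has already been removed at this point in the run):
	    # list the leading entries of an lz4-compressed preload tarball
	    lz4 -dc preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head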
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
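	The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming: the link name is the certificate's subject hash plus a numeric suffix, which is what lets TLS clients find CA certificates in /etc/ssl/certs by hash. A minimal sketch of deriving such a link for one of the certificates above (illustrative only):
	    # compute the subject hash and create the corresponding .0 link
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"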
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
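	Each openssl x509 -checkend 86400 probe above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero status flags an imminent expiry. The same check in isolation, using one of the paths from the run above:
	    # exit 0: valid for at least another 24h; exit 1: expires (or has expired) within that window
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "certificate ok" || echo "certificate expires within 24h"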
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
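The `exit 0` probe above is how WaitForSSH decides the VM is reachable: it keeps invoking the external ssh client with the options shown until the trivial command succeeds. A rough equivalent, reusing the flag set and key path from the log (everything else is an illustrative sketch, not the libmachine implementation):

    // sshReady runs `exit 0` over SSH with the external client, mirroring the
    // invocation logged above. Sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func sshReady(user, ip, keyPath string) bool {
    	args := []string{
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		fmt.Sprintf("%s@%s", user, ip),
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa"
    	fmt.Println(sshReady("docker", "192.168.72.106", key))
    }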
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
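The shell snippet above makes the node's own hostname resolve locally: if no /etc/hosts line ends in no-preload-184055, it either rewrites an existing 127.0.1.1 entry or appends one; the same pattern is used again further down for host.minikube.internal. A small sketch that renders this command for an arbitrary hostname (the helper name is made up):

    // hostsFixupCommand renders the /etc/hosts snippet seen in the log for an
    // arbitrary hostname. Hypothetical helper, for illustration only.
    package main

    import "fmt"

    func hostsFixupCommand(hostname string) string {
    	return fmt.Sprintf(`
    	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    		else
    			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    		fi
    	fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixupCommand("no-preload-184055"))
    }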
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
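configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.72.106, localhost, minikube and no-preload-184055, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A bare-bones sketch of producing a certificate with those SANs (self-signed ECDSA here for brevity; minikube instead signs with the CA kept under .minikube/certs):

    // Generates a server certificate with the SANs listed in the log.
    // Illustrative sketch, not minikube's provision code.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-184055"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-184055"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.106")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }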
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
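The %! sequences that appear in several logged commands in this section (the printf above, `date +%!s(MISSING).%!N(MISSING)` a little further down, and the later stat and find invocations) are artifacts of the command string being echoed back through a Printf-style logger with no arguments; the commands actually executed use plain %s, %N, %p and so on. The effect is easy to reproduce:

    // Reproduces the %!s(MISSING)-style artifacts seen in the log: a string
    // containing printf verbs is itself used as a format string with no args.
    package main

    import "fmt"

    func main() {
    	fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
    }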
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
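The guest-clock check above is plain subtraction of the two readings for the same instant:

    1710487123.702541727 s (Guest) - 1710487123.620064146 s (Remote) = 0.082477581 s ≈ 82.477581 ms

which matches the logged delta and falls inside the tolerance, so no clock adjustment is made before the machines lock is released.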
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
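Getting CRI-O ready above comes down to a few in-place edits of /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup), a crio restart, then waiting up to 60s each for the socket and for crictl to answer. A minimal sketch of the same kind of key rewrite, assuming the simple `key = "value"` line format the sed expressions target (illustrative, not minikube's crio.go):

    // setCrioOption rewrites (or appends) a `key = "value"` line in a CRI-O
    // drop-in config, mimicking the sed commands in the log. Sketch only.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	line := fmt.Sprintf("%s = %q", key, value)
    	var out string
    	if re.Match(data) {
    		out = re.ReplaceAllString(string(data), line)
    	} else {
    		out = string(data) + "\n" + line + "\n"
    	}
    	return os.WriteFile(path, []byte(out), 0o644)
    }

    func main() {
    	_ = setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9")
    	_ = setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
    }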
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
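Since this is the no-preload profile, no preloaded image set exists in CRI-O's store, so minikube falls back to its image cache: it inspects each required image with podman, removes stale copies of images that "need transfer" with crictl rmi, and then `podman load -i`'s the cached tarballs staged under /var/lib/minikube/images. A sketch of the load step, wrapping the exact command shown in the log (the helper name is invented):

    // loadCachedImage loads an image tarball into CRI-O's storage via podman,
    // mirroring the `sudo podman load -i ...` calls in the log. Sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func loadCachedImage(tarball string) error {
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
    		fmt.Println(err)
    	}
    }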
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
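
For reference, the guest-clock check above compares the VM's `date +%s.%N` output against the host time and logs the delta (~97ms here). A minimal sketch of that comparison, not minikube's actual implementation; `parseDateOutput` is a hypothetical helper and assumes `%N` is zero-padded to nine digits so the fractional part is already in nanoseconds:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// parseDateOutput converts "seconds.nanoseconds" (the output of `date +%s.%N`)
	// into a time.Time. Hypothetical helper for illustration only; assumes the
	// fractional part is exactly nine digits (nanoseconds).
	func parseDateOutput(out string) (time.Time, error) {
		var sec, nsec int64
		if _, err := fmt.Sscanf(out, "%d.%d", &sec, &nsec); err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		// Timestamp value taken from the log line above.
		guest, err := parseDateOutput("1710487143.028884522")
		if err != nil {
			fmt.Println(err)
			return
		}
		host := time.Now()
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// minikube compares this delta against a tolerance before proceeding.
		fmt.Printf("guest clock delta: %v\n", delta)
	}
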
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
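
The three `sed` invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, switch the cgroup manager to cgroupfs, and pin conmon to the "pod" cgroup. A minimal sketch of the same transformation (not minikube's code; the sample input below is a hypothetical stock drop-in, not copied from the test VM):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Hypothetical stock 02-crio.conf snippet.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.6"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// pause_image -> registry.k8s.io/pause:3.9 (first sed in the log).
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// cgroup_manager -> cgroupfs (second sed).
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then append conmon_cgroup = "pod"
		// right after cgroup_manager (third and fourth sed).
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
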
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
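
The one-liner above updates /etc/hosts idempotently: filter out any stale host.minikube.internal entry, append the current one, then copy the temp file over /etc/hosts. A minimal sketch of that filter-and-append pattern (hypothetical `updateHosts` helper and sample contents, not minikube's code):

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// updateHosts drops any existing line ending in "\t<name>" and appends a fresh
	// "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log.
	// (minikube writes the result to /tmp/h.$$ and copies it over /etc/hosts.)
	func updateHosts(contents, ip, name string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry, drop it
			}
			out = append(out, line)
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}
	
	func main() {
		// Hypothetical existing contents with a stale entry.
		hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
		fmt.Print(updateHosts(hosts, "192.168.39.1", "host.minikube.internal"))
	}
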
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
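
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll-until-the-process-appears loop on roughly a 500ms cadence. A minimal sketch of that pattern (hypothetical `waitForProcess` helper, run locally rather than over SSH):

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForProcess polls `pgrep -xnf <pattern>` until a matching process exists
	// or the timeout expires, matching the cadence visible in the log.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // pgrep exits 0 once a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("process matching %q did not appear within %v", pattern, timeout)
	}
	
	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
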
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
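
The healthz wait that starts here (and whose 403/500 responses appear further down) amounts to polling /healthz until it returns 200. A minimal sketch under the assumption of an anonymous probe that skips TLS verification; `waitForHealthz` is a hypothetical helper, not minikube's api_server.go code:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK.
	// It probes anonymously, which matches the 403 "system:anonymous" responses
	// seen below while RBAC bootstrap is still in progress.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// A 403, or a 500 listing "[-]poststarthook/... failed", just means
				// the apiserver is not ready yet; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy within %v", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.72.106:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
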
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
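
For reference, the preload path above copies the lz4 image tarball to the guest and unpacks it into /var before crictl confirms the images exist. A minimal local sketch of the same tar invocation (hypothetical paths; minikube runs this on the VM over SSH):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// extractPreload mirrors the tar command in the log:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		// Paths are hypothetical; the log uses /preloaded.tar.lz4 and /var on the guest.
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}
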
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
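
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check expressed in Go, as a standalone sketch with a hypothetical path (the log checks the certs under /var/lib/minikube/certs):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}
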
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
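The repeated 500 bodies above are the apiserver's verbose healthz report; the only failing check in this window is the rbac/bootstrap-roles post-start hook, and the endpoint returns 200 as soon as it clears. The same per-check breakdown can be pulled by hand against a running profile (a sketch; the kubeconfig context is assumed to match the profile name no-preload-184055 seen in this log):

    # verbose healthz gives the same [+]/[-] listing the test logs above
    kubectl --context no-preload-184055 get --raw '/healthz?verbose'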
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
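The 457-byte 1-k8s.conflist written here is the bridge CNI configuration recommended a few lines above for the kvm2 + crio combination. Its contents are not echoed into the log, but they can be inspected on the node itself (a sketch; profile name assumed from this run):

    # dump the CNI config the test just copied into the guest
    minikube -p no-preload-184055 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist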
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
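Every pod in this first pass is "skipped" rather than waited on because the gate is the node condition, not the pods: no-preload-184055 still reports Ready "False" right after the kubelet restart, so each per-pod wait short-circuits immediately. The node condition can be checked directly (context and node name taken from this log):

    kubectl --context no-preload-184055 get node no-preload-184055
    kubectl --context no-preload-184055 describe node no-preload-184055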
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
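A negative oom_adj (-16 here) means the kernel OOM killer strongly prefers other processes over the apiserver. The test reads it straight from /proc; the same check can be repeated interactively (a sketch; profile name assumed):

    minikube -p no-preload-184055 ssh
    # then, inside the guest:
    cat /proc/$(pgrep kube-apiserver)/oom_adj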
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
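The "connection refused" at 07:19:13.062 simply means the restarted kube-apiserver had not yet bound 192.168.39.80:8443 after the kubeadm init phases above; the next polls succeed once the container comes up. If this needs checking by hand, the container and the endpoint can be probed from inside the VM (a sketch; the profile name embed-certs-709708 is inferred from the pod names later in this log, and crictl/curl are assumed to be present in the minikube guest):

    minikube -p embed-certs-709708 ssh -- sudo crictl ps --name kube-apiserver
    minikube -p embed-certs-709708 ssh -- curl -ks https://localhost:8443/healthz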
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
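Note that metrics-server in this run is pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so its pod is not expected to become Ready; that matches the metrics-server-57f55c9bc5-* pods that stay Pending/not-Ready throughout these logs. Addon and deployment state for a profile like this one can be confirmed with (a sketch; profile/context name assumed):

    minikube -p no-preload-184055 addons list
    kubectl --context no-preload-184055 -n kube-system get deploy,pods -l k8s-app=metrics-server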
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
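
	The repeating "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a roughly half-second probe loop: this test is waiting for a kube-apiserver process to appear on the node before it can talk to the cluster. A rough local sketch of such a probe follows; it assumes pgrep runs on the local host (the test actually runs it over SSH via ssh_runner.go), and the five-minute deadline is illustrative, not taken from the test.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning returns true if pgrep finds a kube-apiserver process whose
	// command line mentions "minikube", using the same pattern as the log lines
	// above. pgrep exits non-zero when nothing matches, which surfaces as an error.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		// Probe every 500ms, roughly the cadence visible in the timestamps above.
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()

		timeout := time.After(5 * time.Minute) // assumption: illustrative overall deadline
		for {
			select {
			case <-ticker.C:
				if apiserverRunning() {
					fmt.Println("kube-apiserver process found")
					return
				}
			case <-timeout:
				fmt.Println("gave up waiting for kube-apiserver")
				return
			}
		}
	}
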
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
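
	For the no-preload node, node_ready.go first waits for the Node object itself to report Ready before the per-pod checks begin, and node_conditions.go reads CPU and ephemeral-storage capacity off the same object. A compact client-go sketch of reading those fields follows, under the same kubeconfig assumption as the earlier sketch; the node name is copied from the log, and the code is illustrative, not minikube's implementation.

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			filepath.Join(homedir.HomeDir(), ".kube", "config")) // assumed kubeconfig path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// "no-preload-184055" is the node name taken from the log lines above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-184055", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Prints "True" once the kubelet has registered and reports healthy.
				fmt.Printf("node Ready condition: %s\n", c.Status)
			}
		}
		// The capacity figures logged by node_conditions.go come from the same status.
		fmt.Printf("cpu capacity: %s, ephemeral-storage capacity: %s\n",
			node.Status.Capacity.Cpu(), node.Status.Capacity.StorageEphemeral())
	}
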
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
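
	While the apiserver probe keeps failing, the test falls back to diagnostics: it lists CRI containers by name with "crictl ps -a --quiet --name=...", and because every list comes back empty it gathers kubelet, dmesg, CRI-O and container-status logs instead; the "kubectl describe nodes" step then fails with the connection-refused error shown above, which is expected while no apiserver container exists. A local sketch of that list-then-fall-back step follows; it assumes crictl and journalctl are available on the host rather than reached over SSH, and the container names and journald units are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the container IDs crictl reports for a given name,
	// the same query as the "listing CRI containers" lines above.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the 'No container was found matching "..."' warnings above.
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%q containers: %v\n", name, ids)
		}

		// Fallback diagnostics, as in the "Gathering logs for kubelet/CRI-O" steps:
		// tail the relevant journald units instead of querying the absent apiserver.
		for _, unit := range []string{"kubelet", "crio"} {
			out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "20").CombinedOutput()
			fmt.Printf("--- last lines of %s ---\n%s\n", unit, out)
		}
	}
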
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
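(The cycle above is minikube's control-plane probe: for each expected component — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard — it runs "sudo crictl ps -a --quiet --name=<component>" over SSH, finds an empty ID list, and logs "No container was found matching ...". A minimal Go sketch of the same per-component check is shown below; running crictl locally via os/exec, rather than through minikube's ssh_runner, is an assumption of this sketch and not minikube's actual implementation.)

// probe_containers.go: minimal sketch of the per-component crictl probe
// seen in the log cycle above. Assumes crictl is on PATH and sudo works
// non-interactively; command shape mirrors the log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// List all containers whose name matches the component, IDs only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}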
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
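(Interleaved with that probe loop, three other test processes — PIDs 57679, 56654 and 56818 — keep polling their metrics-server pods and logging Ready:"False". A rough local equivalent of that readiness poll follows; it shells out to kubectl with a jsonpath query instead of the tests' client-go based pod_ready helper, and the 2-second interval is an illustrative choice, not the tests' actual timing.)

// pod_ready_check.go: rough equivalent of the pod_ready polling above.
// Assumes kubectl is already pointed at the cluster; pod and namespace
// names are taken from the log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	pod := "metrics-server-57f55c9bc5-gwnxc"
	for {
		ready, err := podReady("kube-system", pod)
		if err != nil {
			fmt.Println("check failed:", err)
		} else {
			fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":%v\n", pod, ready)
		}
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}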
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
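(Because every kubectl call against localhost:8443 is refused — the apiserver container never came up — the "describe nodes" step fails each time and minikube falls back to host-level diagnostics: the kubelet and CRI-O journals, dmesg, and a raw container listing. The condensed sketch below reruns those gathering commands locally; command strings are copied from the log lines, while running them outside ssh_runner and omitting the kubeconfig-dependent describe-nodes step are assumptions of this sketch.)

// gather_diagnostics.go: condensed version of the "Gathering logs for ..."
// steps above, run locally with simplified error handling.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		fmt.Printf("==> %s <==\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("command failed: %v\n", err)
		}
		fmt.Print(string(out))
	}
}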
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
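The block above shows the diagnostic pass probing for each control-plane container by shelling out to crictl and treating empty output as "No container was found matching ...". Below is a minimal Go sketch of that pattern, assuming sudo and crictl are available on the node; the component list and crictl flags are taken from the log, while the function layout and printed messages are illustrative only, not minikube's actual code.

// Sketch only: enumerate CRI containers per control-plane component,
// mirroring the `sudo crictl ps -a --quiet --name=<component>` calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Empty output from crictl means no container (running or exited) matched.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}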
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
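The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the following kubeadm init can regenerate it. A minimal sketch of the same idea, using the endpoint from this run; the helper name, error handling, and messages are assumptions made for illustration rather than the implementation behind kubeadm.go:162.

// Sketch only: remove /etc/kubernetes/*.conf files that do not reference the
// expected control-plane endpoint, matching the `grep ... || rm -f` steps above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove it.
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
			continue
		}
		fmt.Printf("keeping %s (already points at %s)\n", f, endpoint)
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}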
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
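Configuring the bridge CNI above amounts to writing a conflist into /etc/cni/net.d (the log copies 457 bytes to 1-k8s.conflist). The sketch below shows what writing a generic bridge plus host-local conflist could look like; the JSON is a plausible minimal example, not the exact file copied in this run, and the subnet and plugin options are assumptions.

// Sketch only: write an example bridge CNI conflist. The file contents here are
// illustrative and differ from the 457-byte file referenced in the log.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}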
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
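The repeated `kubectl get sa default` runs above are a poll-until-success loop: the cluster is only treated as ready for the RBAC/addon steps once the default service account exists in the default namespace. A sketch of that retry pattern, with the kubectl binary and kubeconfig paths taken from the log; the roughly 500ms interval, timeout, and helper name are assumptions for illustration.

// Sketch only: poll `kubectl get sa default` until it succeeds or times out,
// matching the repeated invocations recorded above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}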
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
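
The healthz wait recorded above is a plain HTTP probe: minikube hits the apiserver's /healthz endpoint and considers it healthy once it gets a 200 with body "ok". Below is a minimal sketch of that pattern in Go; the URL comes from the log, but the retry interval, timeout, and TLS handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok",
// or the deadline passes. Interval and timeout are illustrative.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A bootstrap probe typically cannot verify the apiserver's
		// self-signed serving cert yet, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Address taken from the log above; any healthz endpoint works.
	if err := waitForHealthz("https://192.168.50.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}
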
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
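
The "minor skew: 1" note above compares the client kubectl minor version (1.29) against the cluster's (1.28); kubectl tolerates a skew of one minor version. A rough sketch of that comparison follows; the parsing is deliberately simple and illustrative, not minikube's code, which would normally rely on a semver library.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor version
// components of two "major.minor.patch" strings.
func minorSkew(client, cluster string) (int, error) {
	parse := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	cm, err := parse(client)
	if err != nil {
		return 0, err
	}
	sm, err := parse(cluster)
	if err != nil {
		return 0, err
	}
	if cm > sm {
		return cm - sm, nil
	}
	return sm - cm, nil
}

func main() {
	skew, _ := minorSkew("1.29.2", "1.28.4") // values from the log above
	fmt.Printf("minor skew: %d\n", skew)     // prints: minor skew: 1
}
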
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
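
The "container status" command above has a built-in fallback: it uses crictl if it resolves on PATH and otherwise falls back to docker ps. The sketch below mirrors that fallback in Go under the assumption of local command execution; minikube actually runs the one-liner remotely through its ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries `crictl ps -a` first and falls back to
// `docker ps -a` if crictl is missing or fails.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
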
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
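
The grep/rm sequence above checks each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it; here the files are simply absent after the kubeadm reset, so every check exits with status 2 and the rm is a no-op. A sketch of that check-and-remove loop, assuming local file access rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig under /etc/kubernetes that does
// not mention the expected control-plane endpoint. Missing files are
// skipped, matching the "No such file or directory" case in the log.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file does not exist: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(f); err != nil {
				fmt.Printf("could not remove stale %s: %v\n", f, err)
			}
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
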
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
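
The 457-byte conflist copied above configures the bridge CNI plugin that the "Configuring bridge CNI" step refers to. The log only records the file's size, so the snippet below is a generic bridge + host-local example of what such a conflist looks like, not minikube's actual 1-k8s.conflist; it is embedded in a small Go program only to keep the examples in one language.

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge + host-local CNI conflist, written to the path
// used in the log above. Subnet and plugin options are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed (needs root):", err)
	}
}
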
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
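	The repeated "get sa default" calls above are minikube polling, at roughly 500 ms intervals, until the default ServiceAccount exists before it elevates kube-system privileges; a minimal shell sketch of the same wait (command and paths taken verbatim from the log, the interval inferred from the timestamps):
	  until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500 ms spacing of the log entries above
	  done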
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
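	At this point the embed-certs-709708 profile is up and, per the "Done!" line above, kubectl on the host is pointed at it by default; the state reported in this log can be double-checked with ordinary kubectl (a sketch, not output of this run; the context name is taken from the log):
	  kubectl config current-context                        # expected: embed-certs-709708
	  kubectl --context embed-certs-709708 get nodes -o wide
	  kubectl --context embed-certs-709708 -n kube-system get pods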
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
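	The failure output above already lists the commands to run on the node; consolidated into one sequence for convenience (every command below is taken verbatim from the kubeadm hints, CONTAINERID is a placeholder for the failing container's ID):
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID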
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
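	The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init is retried. Roughly equivalent, as one loop (paths and pattern taken from the log):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	      || sudo rm -f /etc/kubernetes/$f
	  done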
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 
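	The kubeadm failure above repeats the same triage advice several times; collected in one place, and assuming interactive shell access to the node (for example via 'minikube ssh', which this log does not show), the checks it points to are:

		# Run as root (or prefix with sudo). First, is the kubelet service up, and what do its recent logs say?
		systemctl status kubelet
		journalctl -xeu kubelet

		# Then list the Kubernetes containers CRI-O knows about and inspect any that are failing;
		# CONTAINERID is a placeholder for an ID taken from the ps listing.
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# Retry suggested by the minikube output above for a kubelet cgroup-driver mismatch
		# (profile name and other start flags omitted here).
		minikube start --extra-config=kubelet.cgroup-driver=systemd

	These commands are taken directly from the failure message and the minikube suggestion; this is only a sketch of the recommended triage sequence, not part of the captured log.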
	
	
	==> CRI-O <==
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.632908610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488007632888335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6966df6f-29c1-4a9b-b141-a320f0caed16 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.633474690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24e6509c-42c6-4c35-be94-29d01b73d3e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.633526766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24e6509c-42c6-4c35-be94-29d01b73d3e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.633726526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24e6509c-42c6-4c35-be94-29d01b73d3e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.675842406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72a3bdfb-f1ce-402e-92ba-e68ca4f5db73 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.675919394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72a3bdfb-f1ce-402e-92ba-e68ca4f5db73 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.677003113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=462a88bf-6738-4eec-bbfd-a9c68247958f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.677618191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488007677592156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=462a88bf-6738-4eec-bbfd-a9c68247958f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.678736547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8686bbf9-fb2b-4c8d-8c5b-083aef11c538 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.678816912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8686bbf9-fb2b-4c8d-8c5b-083aef11c538 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.679028306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8686bbf9-fb2b-4c8d-8c5b-083aef11c538 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.730059630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=619c5614-10e2-4d31-b25a-5529077d5801 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.730202565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=619c5614-10e2-4d31-b25a-5529077d5801 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.731681697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b19187f3-2398-49e5-9a56-c195a40dfbdb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.732356875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488007732329263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b19187f3-2398-49e5-9a56-c195a40dfbdb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.733059764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d80d943-6761-4d62-bbf7-ad16bf9fc474 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.733257761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d80d943-6761-4d62-bbf7-ad16bf9fc474 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.733513622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d80d943-6761-4d62-bbf7-ad16bf9fc474 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.774476880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fec300e1-4f2a-4465-bf07-fa14b950dce5 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.774555643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fec300e1-4f2a-4465-bf07-fa14b950dce5 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.776018166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7907617-ddf1-421c-8fcc-de3c226fa644 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.776662379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488007776634106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7907617-ddf1-421c-8fcc-de3c226fa644 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.777453456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a812136-2814-419d-a63e-3960b563ee8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.777537391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a812136-2814-419d-a63e-3960b563ee8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:33:27 embed-certs-709708 crio[696]: time="2024-03-15 07:33:27.777759730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a812136-2814-419d-a63e-3960b563ee8e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb666f4e5a048       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f513769166160       storage-provisioner
	8c49534c347a8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   d1cdc3c84d40d       coredns-5dd5756b68-v2mxd
	cbd6b7eb2be22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   39ed58a89dfb1       coredns-5dd5756b68-pqjfs
	3d8e1cb9846bd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   27762be8106fe       kube-proxy-8pd5c
	96e34f8838447       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   42fb0f69a7cea       etcd-embed-certs-709708
	9837fe7649aee       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   3f3f00693e407       kube-controller-manager-embed-certs-709708
	60a71b54a648d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   a358876953f39       kube-apiserver-embed-certs-709708
	7ab47ef545847       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   a45ace10d036d       kube-scheduler-embed-certs-709708
	0ff98be2a427f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Exited              kube-apiserver            1                   85326a54746b2       kube-apiserver-embed-certs-709708
	
	
	==> coredns [8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-709708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-709708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=embed-certs-709708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:24:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-709708
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:33:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:29:37 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:29:37 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:29:37 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:29:37 +0000   Fri, 15 Mar 2024 07:24:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    embed-certs-709708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 483fafe3358b4d4181da45f3abe565d9
	  System UUID:                483fafe3-358b-4d41-81da-45f3abe565d9
	  Boot ID:                    95a6e305-918f-473c-802b-7331b9cbe3c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-pqjfs                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-5dd5756b68-v2mxd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-embed-certs-709708                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-709708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-709708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-8pd5c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-embed-certs-709708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-sz8z6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-709708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-709708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-709708 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s  kubelet          Node embed-certs-709708 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m8s   kubelet          Node embed-certs-709708 status is now: NodeReady
	  Normal  RegisteredNode           9m6s   node-controller  Node embed-certs-709708 event: Registered Node embed-certs-709708 in Controller
	
	
	==> dmesg <==
	[  +0.053877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042919] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.741657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.980000] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.683478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar15 07:19] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.069911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065561] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.204268] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.134165] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.285986] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +5.311504] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +0.076741] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.183433] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +5.790178] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.011429] kauditd_printk_skb: 69 callbacks suppressed
	[Mar15 07:23] kauditd_printk_skb: 3 callbacks suppressed
	[Mar15 07:24] systemd-fstab-generator[3404]: Ignoring "noauto" option for root device
	[  +4.588083] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.687372] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[ +13.392797] systemd-fstab-generator[3928]: Ignoring "noauto" option for root device
	[  +0.085473] kauditd_printk_skb: 14 callbacks suppressed
	[Mar15 07:25] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8] <==
	{"level":"info","ts":"2024-03-15T07:24:03.738557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae switched to configuration voters=(15221743556212180654)"}
	{"level":"info","ts":"2024-03-15T07:24:03.739021Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","added-peer-id":"d33e7f1dba1e46ae","added-peer-peer-urls":["https://192.168.39.80:2380"]}
	{"level":"info","ts":"2024-03-15T07:24:03.748889Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T07:24:03.753202Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-03-15T07:24:03.753364Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-03-15T07:24:03.753564Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d33e7f1dba1e46ae","initial-advertise-peer-urls":["https://192.168.39.80:2380"],"listen-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T07:24:03.75518Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T07:24:04.575457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgPreVoteResp from d33e7f1dba1e46ae at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgVoteResp from d33e7f1dba1e46ae at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became leader at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae elected leader d33e7f1dba1e46ae at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.57731Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:embed-certs-709708 ClientURLs:[https://192.168.39.80:2379]}","request-path":"/0/members/d33e7f1dba1e46ae/attributes","cluster-id":"e6a6fd39da75dc67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:24:04.577502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:24:04.579965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2024-03-15T07:24:04.580244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:24:04.582888Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:24:04.580274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:24:04.583394Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:24:04.580477Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599415Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599638Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 07:33:28 up 14 min,  0 users,  load average: 0.27, 0.34, 0.23
	Linux embed-certs-709708 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed] <==
	W0315 07:23:58.166601       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.169008       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.285751       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.322958       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.413344       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.585456       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.687598       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.787249       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.788359       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.794278       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.045937       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.114631       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.162281       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.226071       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.229669       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.239224       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.260608       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.325568       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.463870       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.677773       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.692587       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.701533       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.724227       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.759358       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.869801       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa] <==
	W0315 07:29:07.189439       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:07.189564       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:29:07.189608       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:29:07.190103       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:29:07.190296       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:29:07.191167       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:30:06.100117       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:30:07.189736       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:30:07.189923       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:30:07.189967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:30:07.191350       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:30:07.191576       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:30:07.191613       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:31:06.099344       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:32:06.100068       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:32:07.191231       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:32:07.191327       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:32:07.191339       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:32:07.192419       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:32:07.192648       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:32:07.192700       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:33:06.100104       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1] <==
	I0315 07:27:52.848886       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:22.371667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:22.858054       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:28:52.377583       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:28:52.867444       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:29:22.384326       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:22.875726       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:29:52.390490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:29:52.884490       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:30:10.627744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="286.549µs"
	E0315 07:30:22.396951       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:22.893609       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:30:23.624200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="150.574µs"
	E0315 07:30:52.404059       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:30:52.902097       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:22.411103       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:22.910745       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:31:52.418916       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:31:52.919370       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:32:22.425102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:32:22.927711       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:32:52.432187       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:32:52.936605       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:33:22.440458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:33:22.945713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad] <==
	I0315 07:24:24.602816       1 server_others.go:69] "Using iptables proxy"
	I0315 07:24:24.622582       1 node.go:141] Successfully retrieved node IP: 192.168.39.80
	I0315 07:24:24.748610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 07:24:24.748629       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:24:24.754439       1 server_others.go:152] "Using iptables Proxier"
	I0315 07:24:24.755221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:24:24.758382       1 server.go:846] "Version info" version="v1.28.4"
	I0315 07:24:24.758395       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:24:24.761237       1 config.go:188] "Starting service config controller"
	I0315 07:24:24.761284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:24:24.761310       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:24:24.761317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:24:24.765721       1 config.go:315] "Starting node config controller"
	I0315 07:24:24.765733       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:24:24.862226       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:24:24.862262       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:24:24.866890       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151] <==
	W0315 07:24:07.014115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.014338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 07:24:07.029109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:24:07.029204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 07:24:07.112042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:24:07.112220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 07:24:07.286747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 07:24:07.287535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 07:24:07.361338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 07:24:07.361391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 07:24:07.389421       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:24:07.390060       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:24:07.441548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 07:24:07.441671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 07:24:07.459500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:24:07.459547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 07:24:07.469951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.470249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 07:24:07.555521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:24:07.555862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 07:24:07.568481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 07:24:07.568528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 07:24:07.573853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.573875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0315 07:24:09.800484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:31:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:31:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:31:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:31:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:31:18 embed-certs-709708 kubelet[3731]: E0315 07:31:18.607312    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:31:32 embed-certs-709708 kubelet[3731]: E0315 07:31:32.607839    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:31:43 embed-certs-709708 kubelet[3731]: E0315 07:31:43.607573    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:31:54 embed-certs-709708 kubelet[3731]: E0315 07:31:54.607022    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:32:05 embed-certs-709708 kubelet[3731]: E0315 07:32:05.607430    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:32:09 embed-certs-709708 kubelet[3731]: E0315 07:32:09.633523    3731 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:32:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:32:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:32:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:32:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:32:20 embed-certs-709708 kubelet[3731]: E0315 07:32:20.608374    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:32:34 embed-certs-709708 kubelet[3731]: E0315 07:32:34.608929    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:32:49 embed-certs-709708 kubelet[3731]: E0315 07:32:49.608285    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:33:02 embed-certs-709708 kubelet[3731]: E0315 07:33:02.608107    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:33:09 embed-certs-709708 kubelet[3731]: E0315 07:33:09.633474    3731 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:33:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:33:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:33:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:33:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:33:14 embed-certs-709708 kubelet[3731]: E0315 07:33:14.607683    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:33:27 embed-certs-709708 kubelet[3731]: E0315 07:33:27.608949    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	
	
	==> storage-provisioner [fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d] <==
	I0315 07:24:25.883491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:24:25.896982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:24:25.897201       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:24:25.906431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:24:25.906897       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd!
	I0315 07:24:25.907793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"554fdb25-4aa4-4a43-b92d-ef6385b035d4", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd became leader
	I0315 07:24:26.007696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-709708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-sz8z6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6: exit status 1 (67.139731ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-sz8z6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.41s)
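A minimal sketch of how this post-mortem could be re-run by hand, assuming the embed-certs-709708 profile is still up; the context, selector, and pod name are taken from the output above, everything else is illustrative. The describe command logged at helpers_test.go:277 is run without a namespace flag, so it looks in default, which would explain the NotFound even though the kubelet log places the pod in kube-system:

	# list non-Running pods across all namespaces (mirrors the helpers_test.go:261 query)
	kubectl --context embed-certs-709708 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe the non-running metrics-server pod in its actual namespace (kube-system)
	kubectl --context embed-certs-709708 -n kube-system describe pod metrics-server-57f55c9bc5-sz8z6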

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[... the connection-refused warning above repeats verbatim while the apiserver at 192.168.61.243:8443 remains unreachable ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
E0315 07:29:21.071493   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
[the same WARNING line is repeated another 38 times]
E0315 07:29:58.532311   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
[the same WARNING line is repeated another 104 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
E0315 07:33:01.578895   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
(the warning above appeared another 79 times in a row in this span of the log)
E0315 07:34:21.072130   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
E0315 07:34:58.532162   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
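Note that the last warning above differs from the rest: the request never reached the network because the client-side rate limiter gave up once the overall wait context expired. Below is a minimal, hypothetical Go sketch of that failure mode; it uses golang.org/x/time/rate for illustration rather than the limiter client-go actually embeds, and the 2s/3s durations are stand-ins for the test's real timeouts.

// Hypothetical illustration only: a rate-limited call reports an error once
// the caller's context deadline can no longer be met.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Stand-in for the overall wait deadline (the test above used 9m0s).
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// One token every 3s with burst 1: slower than the deadline permits.
	limiter := rate.NewLimiter(rate.Every(3*time.Second), 1)
	limiter.Allow() // consume the initial token

	// Wait blocks for the next token and returns an error when the context is
	// cancelled or its deadline cannot be met, which is what the
	// "client rate limiter Wait returned an error" warning reports.
	if err := limiter.Wait(ctx); err != nil {
		fmt.Println("rate limiter Wait returned an error:", err)
	}
}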
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (249.641811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-981420" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
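For context, a wait like the one that just timed out is typically implemented by polling the apiserver for pods matching a label selector until one is Running or the deadline passes. The sketch below is a minimal client-go version under that assumption; waitForPods, the 5s poll interval, and the error handling are illustrative, not the actual helpers_test.go code.

// Hypothetical sketch of waiting for pods that match a label selector;
// not the real test helper.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until at least one pod with the given label is Running,
// or the timeout elapses (the "failed to start within 9m0s" case above).
func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Transient errors (for example the "connection refused" warnings
				// logged while the apiserver was down) are retried, not fatal.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

For the failure above this would be invoked roughly as waitForPods(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute), retrying through the connection-refused errors until the context deadline expires.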
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (243.769675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
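The post-mortem starts by probing host and apiserver state via minikube status with a Go template, and treats a non-zero exit as informational as long as output was produced ("status error: exit status 2 (may be ok)"). A hedged sketch of that pattern follows; hostState is a hypothetical helper, not the suite's own implementation.

// Hypothetical sketch of shelling out to `minikube status` with a Go template;
// not the real post-mortem helper.
package example

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the {{.Host}} field for a profile, e.g. "Running" or "Stopped".
// minikube status exits non-zero when components are down, so the output is
// inspected even when the command reports an error.
func hostState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state == "" && err != nil {
		return "", fmt.Errorf("status failed with no output: %w", err)
	}
	return state, nil
}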
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25: (1.609929473s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
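	[editor's note] The lines above record the standard readiness pattern for this phase: poll the apiserver's /healthz endpoint, treat 403/500 responses as "not ready yet", and stop as soon as it returns 200 "ok". As a purely illustrative aside (a minimal sketch, not minikube's actual implementation; the URL, timeout, and ~500ms cadence are taken from the log), such a polling loop in Go could look like:

	// Hypothetical sketch: poll an apiserver /healthz endpoint until it returns HTTP 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// During bootstrap the apiserver serves a cert the client may not trust yet,
		// so certificate verification is skipped here purely for illustration.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log above
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.123:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}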
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
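
The node_conditions.go lines above read each node's CPU and ephemeral-storage capacity while verifying that no NodePressure condition is set. A minimal client-go sketch of reading the same fields; the kubeconfig path and the exact checks are illustrative assumptions, not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())

		// Flag any pressure condition that is currently True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True\n", c.Type)
				}
			}
		}
	}
}
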
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
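
The retry.go:31 lines above show libmachine polling for the VM's DHCP lease with a growing, jittered delay ("will retry after ...: waiting for machine to come up"). Below is a small Go sketch of that wait-for-IP pattern under stated assumptions: the lookupIP helper, the initial delay, and the jitter factor are placeholders for illustration, not minikube's actual retry code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address. It is hypothetical and always fails in this sketch.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the domain reports an IP, backing off with jitter
// between attempts, roughly like the "will retry after ..." log lines above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // assumed starting delay
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Add jitter so concurrent waiters do not poll in lockstep,
		// then grow the base delay for the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if ip, err := waitForIP("old-k8s-version-981420", 5*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
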
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
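
The fix.go lines above compare the guest clock against the host-side timestamp and accept the drift only when it falls within tolerance (85.060067ms here). A short Go sketch of that check, using the two timestamps from the log; the 2s tolerance value is an assumption for illustration, not minikube's configured limit:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host clock drift and
// whether it is within the given tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log above: guest epoch 1710487104.166175222
	// versus the host-side timestamp recorded for the same moment.
	guest := time.Unix(1710487104, 166175222)
	host := time.Date(2024, 3, 15, 7, 18, 24, 81115155, time.UTC)

	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%s withinTolerance=%v\n", delta, ok) // prints delta=85.060067ms withinTolerance=true
}
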
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
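
start.go above waits up to 60s for /var/run/crio/crio.sock to appear after CRI-O is restarted. A minimal Go sketch of such a wait loop; the poll interval is an assumption, and this is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout expires,
// similar in spirit to waiting for the CRI-O socket after a restart.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
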
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
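
The pod_ready.go loop above repeatedly checks whether a kube-system pod has reached the Ready condition, logging "Ready":"False" until it does or the 4m0s budget expires. A hedged client-go sketch of that wait; the kubeconfig path and polling interval are placeholders, and this is not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; adjust for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-57f55c9bc5-bhbwz", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
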
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
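
Before installing its own CNI, the log above shows the runner scanning /etc/cni/net.d, skipping loopback configs, and renaming any bridge/podman config with a .mk_disabled suffix so it cannot conflict. A small sketch of that rename pass, assuming direct root access to the same paths rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Keep loopback configs and anything already disabled.
		if e.IsDir() || strings.Contains(name, "loopback.conf") ||
			strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("disable:", err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
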
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
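
The sequence above stops, disables, and masks the cri-docker and docker units so they cannot hold the container sockets while CRI-O is the selected runtime. A sketch of the same systemctl calls driven from Go; it needs root, and failures are only reported because stopping an already-stopped unit is expected.

package main

import (
	"fmt"
	"os/exec"
)

// systemctl runs "sudo systemctl <args...>" and reports (but tolerates) errors.
func systemctl(args ...string) {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("systemctl %v: %v: %s\n", args, err, out)
	}
}

func main() {
	systemctl("stop", "-f", "cri-docker.socket")
	systemctl("stop", "-f", "cri-docker.service")
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("stop", "-f", "docker.socket")
	systemctl("stop", "-f", "docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}
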
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
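
The preceding lines write /etc/crictl.yaml to point crictl at crio.sock and then edit /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and a "pod" conmon cgroup. The sketch below replays those sed edits from Go; it assumes root and an existing CRI-O install, and it is not minikube's own code path.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command as root and returns a descriptive error.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O configured for pause:3.9 and cgroupfs")
}
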
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
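
The netfilter check above fails only because the br_netfilter module is not loaded yet (the sysctl file simply does not exist), which the log notes "might be okay"; the runner then loads the module and enables IPv4 forwarding. A sketch of that fallback, assuming root:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The bridge-nf sysctl only exists once br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward enabled")
}
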
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
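
After restarting CRI-O, the runner waits up to 60s for /var/run/crio/crio.sock to appear and then queries the runtime through crictl, which reports cri-o 1.29.1 above. A sketch of the socket wait loop, with the path and timeout taken from the log and a polling interval chosen for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is up")
}
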
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
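
The bash one-liner above rewrites /etc/hosts through a temp file: it drops any stale host.minikube.internal entry and appends one pointing at the gateway (192.168.72.1). The sketch below does the equivalent edit locally; the scratch file name is a placeholder so it can be tried without touching the real /etc/hosts or going over SSH.

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any line for host and appends "ip<TAB>host".
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.test" is a scratch copy; the real step edits /etc/hosts as root.
	if err := setHostsEntry("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host.minikube.internal entry written")
}
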
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
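
`sudo crictl images --output json` above shows none of the required images are preloaded, so the LoadCachedImages path kicks in. A sketch of that presence check, parsing crictl's JSON listing; the `images`/`repoTags` field names follow the CRI ListImages response and should be treated as an assumption if your crictl version differs.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("parse:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing, must load from cache:", want)
		}
	}
}
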
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
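
The embed-certs-709708 machine is still waiting for a DHCP lease, so the driver polls with a growing, jittered delay (257ms, 291ms, 483ms, ... in the log). A rough sketch of that retry loop; lookupIP is a stub standing in for the libvirt lease query, and the deadline and growth factor are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder; the real driver asks libvirt for the DHCP lease
// matching the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed overall budget
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff, roughly like the log
	}
	fmt.Println("timed out waiting for machine IP")
}
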
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
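
Each cached image above is handled the same way: skip the scp when the tarball already exists on the guest (the "copy: skipping ... (exists)" lines), then stream it into CRI-O's storage with `podman load -i`. A sketch of that per-image load step using two tarball paths from the log; it assumes root on the guest and that the tarballs are already in place.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads a previously copied image tarball into the runtime.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball must already be copied to the guest: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("loaded %s\n", tarball)
	return nil
}

func main() {
	for _, t := range []string{
		"/var/lib/minikube/images/etcd_3.5.10-0",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
	} {
		if err := loadCachedImage(t); err != nil {
			fmt.Println(err)
		}
	}
}
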
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
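
The openssl/ln pairs above install each CA the standard way: hash the PEM with `openssl x509 -hash -noout` and symlink it as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so TLS libraries that scan that directory can find it. A sketch of that pattern; the certificate path is taken from the scp step above, and writing under /etc/ssl/certs requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Compute the OpenSSL subject hash used to name the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		fmt.Println(link, "already present")
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("linked", cert, "as", link)
}
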
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
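
Each `openssl x509 -noout ... -checkend 86400` call above asks whether the certificate stays valid for at least another 86400 seconds (24 hours) before it is reused. A minimal sketch of the same check with Go's crypto/x509, using an illustrative path:

    // Minimal sketch of the `-checkend 86400` validity check done natively.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // for at least the next duration d (mirrors `openssl x509 -checkend`).
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.After(time.Now().Add(d)), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }
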
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
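
fix.go compares the guest clock (read with `date +%s.%N` over SSH) against the host-side timestamp and only resyncs when the difference exceeds a tolerance; here the ~97.6ms delta is accepted. A sketch of that comparison using the two timestamps from the log above; the one-second tolerance is an assumed value for illustration, not minikube's actual constant:

    // Sketch: compare guest and host timestamps and test against a tolerance.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Date(2024, 3, 15, 7, 19, 3, 28884522, time.UTC)  // guest `date +%s.%N`
    	host := time.Date(2024, 3, 15, 7, 19, 2, 931311827, time.UTC) // host-side "Remote" time

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance, for illustration only
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }
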
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
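
The block above verifies bridge netfilter support: the sysctl lookup fails because br_netfilter is not loaded, so the module is loaded with modprobe and IPv4 forwarding is enabled. A sketch of the same fallback, assuming root on the guest:

    // Sketch: check the bridge netfilter sysctl, load br_netfilter if missing,
    // then enable IPv4 forwarding (equivalent of `echo 1 > .../ip_forward`).
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// The sysctl only exists once br_netfilter is loaded, so load it.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v", err)
    		}
    	}
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
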
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
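
Once the kubeadm init phases finish, the restart path waits for a kube-apiserver process and then polls /healthz on https://192.168.72.106:8443 until it answers. A minimal polling sketch against that endpoint; skipping TLS verification and the two-minute deadline are simplifications for illustration (minikube itself verifies against the cluster CA):

    // Minimal sketch: poll the apiserver /healthz endpoint until it reports ok.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.106:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }
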
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
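The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go sketch of the same check, with the certificate path as a placeholder and no claim to mirror minikube's implementation:

    // certcheck.go - illustrative equivalent of "openssl x509 -checkend 86400":
    // fail if the certificate expires within the next 86400 seconds.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // placeholder
    	if err != nil {
    		fmt.Println("read:", err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse:", err)
    		os.Exit(1)
    	}
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid beyond 24h, NotAfter:", cert.NotAfter)
    }
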
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
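The api_server.go lines above poll https://192.168.72.106:8443/healthz until the anonymous 403 and 500 responses (the 500s are emitted while the rbac/bootstrap-roles post-start hook is still completing) give way to a 200. A rough, illustrative sketch of such a polling loop, with the URL and timeout as placeholders and no claim to match minikube's implementation:

    // healthzpoll.go - illustrative: poll the apiserver /healthz endpoint
    // until it returns 200 or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// The probe is anonymous and the apiserver serves a cluster-internal
    		// cert, so verification is skipped here for illustration.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(2 * time.Minute) // placeholder timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.106:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // control plane reports healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for /healthz to return 200")
    }
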
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
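The pod_ready.go lines above poll each system-critical pod until its Ready condition is true, and skip pods whose node still reports NotReady, as happens here. An illustrative client-go sketch of that kind of wait, with the kubeconfig path, namespace and pod name as placeholders rather than minikube's actual code:

    // podready.go - illustrative: poll a pod until its PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-184055", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
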
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
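
The addon enablement logged above amounts to staging manifests on the node over SSH and applying them with the node's bundled kubectl. A rough sketch of that flow follows (not minikube's actual ssh_runner/addons code; the /tmp staging path and the use of the system ssh/scp binaries are assumptions, while the key path, host IP and kubectl binary path are taken from the log lines above):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, printing its combined output so the flow mirrors the log above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	key := "/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa"
	host := "docker@192.168.72.106"

	// Stage the manifest on the node (assumed /tmp path), then apply it with the
	// node-side kubectl, analogous to the scp + "kubectl apply -f" ssh_runner lines above.
	run("scp", "-i", key, "storage-provisioner.yaml", host+":/tmp/storage-provisioner.yaml")
	run("ssh", "-i", key, host,
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /tmp/storage-provisioner.yaml")
}
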
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
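
The healthz wait that just completed is a plain poll loop: request https://192.168.39.80:8443/healthz until it returns 200, tolerating connection-refused errors and 403/500 responses while the control plane comes up. A minimal standalone sketch of that idea (URL taken from the log; the retry count, interval and InsecureSkipVerify are assumptions for illustration only, not minikube's api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Illustration only: skip cert verification instead of trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.39.80:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts, as logged above.
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz is "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
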
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
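
The pod_ready.go lines throughout this section all follow the same pattern: fetch the pod and check its PodReady condition until it flips to True or the deadline passes. A minimal client-go sketch of that pattern (kubeconfig path, namespace, pod name and deadline are taken from the log; the 2s polling interval is an assumption, and this is an illustration rather than minikube's pod_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-8bslq", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// The log's `has status "Ready":"False"` corresponds to this condition.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
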
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
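The describe-nodes failure above is the key symptom: every request to localhost:8443 is refused, which matches the empty crictl listings — no kube-apiserver container exists on the node at all. A minimal sketch of how to confirm this by hand, assuming shell access to the node (for example via "minikube ssh"); only commands already shown in this log are used, and the grep filter is illustrative:

  # No output from the first command means no apiserver container exists (running or exited).
  sudo crictl ps -a --quiet --name=kube-apiserver
  # The kubelet journal is the usual place to see why the static pod is not being started.
  sudo journalctl -u kubelet -n 400 | grep -i apiserver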
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
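Each retry cycle in this log has the same shape: probe for a kube-apiserver process, list CRI containers for every control-plane component, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal bash sketch of the same sequence run by hand; the commands, paths, and flags below are taken from the log lines above, with only the quoting of the pgrep pattern, the loop, and the $(...) form of the final command adjusted for readability:

  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # Check each control-plane component the same way the log does.
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    sudo crictl ps -a --quiet --name="$c"
  done
  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u crio -n 400
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a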
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
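Interleaved with the 57277 output, three other concurrent runs (PIDs 57679, 56654 and 56818) keep polling their metrics-server pods, none of which reach Ready during this window. A hedged sketch of how one could inspect such a pod directly; the pod name is taken from the log, while the --context value is a placeholder for the test profile in use:

  kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-gwnxc -o wide
  kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-gwnxc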
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
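By this point the 57277 run has repeated the same probe every few seconds for over half a minute in this excerpt without ever finding a kube-apiserver container, so each pass ends in the same localhost:8443 refusal. A small illustrative polling loop along the same lines, assuming shell access to the node; the interval and iteration count are arbitrary choices, not minikube's:

  # Poll for an apiserver container; give up after roughly a minute.
  for i in $(seq 1 20); do
    if [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; then
      echo "kube-apiserver container present"; break
    fi
    sleep 3
  done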
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
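The cycle that just completed above, and that repeats throughout this log, is minikube probing for each control-plane container by name via crictl and then gathering kubelet, dmesg, CRI-O and describe-nodes output when nothing is found. A minimal stand-alone sketch of that container probe follows; it only assumes crictl is installed on the node and is not minikube's own code.

    // Hypothetical sketch: list CRI containers by name the same way the
    // logged commands do, and report how many were found.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the logged command
    //   sudo crictl ps -a --quiet --name=<name>
    // which prints one container ID per line, or nothing at all.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
    	for _, name := range components {
    		ids, err := containerIDs(name)
    		if err != nil {
    			fmt.Printf("%s: listing failed: %v\n", name, err)
    			continue
    		}
    		// In the failing runs above every component reports "0 containers".
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }

When every component comes back empty, as it does here, the apiserver never started, which is also why each describe-nodes step in the log fails with a connection refused on localhost:8443.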
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
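Every describe-nodes attempt in this log fails the same way: the kubeconfig on the node points at localhost:8443 and nothing is listening there because kube-apiserver never came up. A quick hypothetical probe of that port from the node (not part of the test suite) would confirm this:

    // Hypothetical check: is anything listening on the apiserver address
    // that kubectl keeps being refused from (localhost:8443)?
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the log: "The connection to the server localhost:8443 was refused"
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }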
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
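	# A minimal sketch of how the health checks looped above could be re-run by hand
	# on the node (e.g. after `minikube ssh` into the affected profile); the binary and
	# kubeconfig paths are copied from the logged commands and are assumptions for any
	# other setup (quoting added for an interactive shell).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig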
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
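
The block of repeated `kubectl get sa default` runs above is a poll: minikube re-checks roughly every 500ms until the default service account exists before the minikube-rbac cluster role binding can be relied on. A small illustrative Go loop in the same spirit (paths and flags are the ones visible in the log; the helper itself is not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or the
// timeout elapses, mirroring the elevateKubeSystemPrivileges wait logged above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}
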
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
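
The "enabled=[metrics-server storage-provisioner default-storageclass]" summary follows directly from the toEnable map logged at the start of "enable addons start": only the addons whose value is true are installed. A trivial Go sketch of that filtering (map trimmed to a few entries):

package main

import "fmt"

func main() {
	// Subset of the toEnable map from the log above.
	toEnable := map[string]bool{
		"default-storageclass": true,
		"metrics-server":       true,
		"storage-provisioner":  true,
		"ingress":              false,
		"dashboard":            false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	fmt.Println("Enabled addons:", enabled)
}
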
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
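
The pod_ready checks above boil down to reading each pod's PodReady condition. A self-contained client-go sketch of the same check, using the kubeconfig path from the log and one of the coredns pod names as an example (illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-5dd5756b68-4g87j", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
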
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
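
The healthz probe above is an HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal Go sketch follows; it skips certificate verification purely for brevity, whereas the real check presumably authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoint taken from the log above.
	resp, err := client.Get("https://192.168.50.123:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q healthy=%v\n", resp.StatusCode, body,
		resp.StatusCode == http.StatusOK && string(body) == "ok")
}
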
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
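
The NodePressure verification reads node capacity (cpu, ephemeral storage) and pressure conditions from the API. A client-go sketch that prints the same information (illustrative, not the verifier minikube ships):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Matches the "node cpu capacity" / "storage ephemeral capacity" lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
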
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
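
The "minor skew: 1" note compares the kubectl client version (1.29.2) with the cluster version (1.28.4); kubectl officially supports clusters within one minor version of itself. A tiny Go sketch of that comparison (deliberately simple parsing):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version from a "major.minor.patch" string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVer, clusterVer := "1.29.2", "1.28.4"
	skew := minor(kubectlVer) - minor(clusterVer)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported when <= 1)\n", skew)
}
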
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
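
The "Gathering logs for ..." steps above shell out to journalctl for the kubelet and CRI-O units and to crictl for individual containers, tailing 400 lines each. An illustrative Go wrapper for the same commands (container ID taken from the log; this is not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
)

// tailLogs runs the given command under sudo and prints its combined output.
func tailLogs(name string, args ...string) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("== %s ==\n%s", name, out)
	if err != nil {
		fmt.Printf("(%s: %v)\n", name, err)
	}
}

func main() {
	tailLogs("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	tailLogs("CRI-O", "journalctl", "-u", "crio", "-n", "400")
	// Container IDs come from `crictl ps -a --quiet --name=<component>`.
	tailLogs("kube-apiserver", "/usr/bin/crictl", "logs", "--tail", "400",
		"2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535")
}
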
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
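The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with an "ok" body is what api_server.go treats as healthy. A hand-run equivalent against the same endpoint as in the log, as a sketch (-k skips verification of the apiserver's self-signed serving certificate):

    curl -k -sS -w ' HTTP %{http_code}\n' https://192.168.72.106:8443/healthz
    # expected: ok HTTP 200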
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
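The discovery sequence above finds container IDs one component at a time by filtering `crictl ps` on the container name; an empty result, as for kindnet here, simply means that component is not deployed in this profile. The same enumeration compressed into a loop, as a sketch:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done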
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
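Once the "Done!" line is printed, the host kubeconfig has been pointed at the new profile. A quick check from the host, as a sketch (the context name comes from the log line above; the kubectl invocations are standard, not part of the test):

    kubectl config current-context              # should print no-preload-184055
    kubectl --context no-preload-184055 get nodes -o wide
    kubectl --context no-preload-184055 -n kube-system get pods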
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
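The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before `kubeadm init` runs, so kubeadm regenerates it (here every file is already absent, so each grep exits 2 and the rm is a no-op). The same check as a single loop, a sketch:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: drop it before kubeadm init
      fi
    done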
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
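The 457-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log. The snippet below is only a generic bridge + host-local conflist of the general shape the bridge CNI plugin accepts; every field value is an assumption for illustration, not minikube's actual file:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF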
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
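The repeated `kubectl get sa default` calls above are a poll loop: bootstrap is only treated as finished once the "default" service account exists (about 12.9s in this run, per the elevateKubeSystemPrivileges metric). The same wait written by hand, as a sketch using the exact command the test runs:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account is present"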
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 
	
	
	==> CRI-O <==
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.881954056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488138881921669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=944892cb-beb8-4dca-b12c-27c39d036be9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.882611189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16e71d5b-530f-4b74-adb9-8eec4bd418a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.882670246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16e71d5b-530f-4b74-adb9-8eec4bd418a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.882701536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16e71d5b-530f-4b74-adb9-8eec4bd418a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.916117429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a6f8c1e-7e81-4c24-8319-ab2d9e9ec1f5 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.916233227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a6f8c1e-7e81-4c24-8319-ab2d9e9ec1f5 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.917973458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe0f276c-032a-45c3-8076-5779ea9991d9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.918363497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488138918319120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe0f276c-032a-45c3-8076-5779ea9991d9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.918979824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad36eda8-97aa-4c51-8832-f9ee467bf751 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.919037961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad36eda8-97aa-4c51-8832-f9ee467bf751 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.919069422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad36eda8-97aa-4c51-8832-f9ee467bf751 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.953240855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae893230-1a8f-486f-a76a-3721bec4ed0f name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.953360847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae893230-1a8f-486f-a76a-3721bec4ed0f name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.954692173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1089741-4253-4cb3-ac69-5696a2e1dd86 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.955085754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488138955057161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1089741-4253-4cb3-ac69-5696a2e1dd86 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.955775359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9647e825-e67c-47a9-8959-7c8dbf54d13c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.955855547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9647e825-e67c-47a9-8959-7c8dbf54d13c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.955896390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9647e825-e67c-47a9-8959-7c8dbf54d13c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.993106009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28306363-13a5-44ee-8e20-6b4b0a7b9e2c name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.993216822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28306363-13a5-44ee-8e20-6b4b0a7b9e2c name=/runtime.v1.RuntimeService/Version
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.994833041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85d01a08-2287-4af6-bb51-8609ae431fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.995379689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488138995340187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85d01a08-2287-4af6-bb51-8609ae431fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.996228270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58d6f6cd-a74a-45b6-be6b-4043dfa2a281 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.996278548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58d6f6cd-a74a-45b6-be6b-4043dfa2a281 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:35:38 old-k8s-version-981420 crio[649]: time="2024-03-15 07:35:38.996309297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58d6f6cd-a74a-45b6-be6b-4043dfa2a281 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar15 07:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054732] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711901] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.844497] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.626265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.561722] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.063802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070293] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.224970] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.142626] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.286086] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.591583] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.077354] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095694] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +9.234531] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 07:22] systemd-fstab-generator[4974]: Ignoring "noauto" option for root device
	[Mar15 07:24] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.078685] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 07:35:39 up 17 min,  0 users,  load average: 0.05, 0.06, 0.06
	Linux old-k8s-version-981420 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bb2e70, 0xc000c2af40)
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: goroutine 149 [syscall]:
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: syscall.Syscall6(0xe8, 0xe, 0xc000e8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000e8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000ccdf20, 0x0, 0x0, 0x0)
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0008fe780)
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Mar 15 07:35:35 old-k8s-version-981420 kubelet[6431]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Mar 15 07:35:35 old-k8s-version-981420 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 15 07:35:35 old-k8s-version-981420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 15 07:35:36 old-k8s-version-981420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 15 07:35:36 old-k8s-version-981420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 15 07:35:36 old-k8s-version-981420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 15 07:35:36 old-k8s-version-981420 kubelet[6440]: I0315 07:35:36.717630    6440 server.go:416] Version: v1.20.0
	Mar 15 07:35:36 old-k8s-version-981420 kubelet[6440]: I0315 07:35:36.717998    6440 server.go:837] Client rotation is on, will bootstrap in background
	Mar 15 07:35:36 old-k8s-version-981420 kubelet[6440]: I0315 07:35:36.720332    6440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 15 07:35:36 old-k8s-version-981420 kubelet[6440]: I0315 07:35:36.721728    6440 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 15 07:35:36 old-k8s-version-981420 kubelet[6440]: W0315 07:35:36.721782    6440 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
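The repeated '[kubelet-check]' lines in the log above boil down to one probe: an HTTP GET against the kubelet's healthz endpoint on localhost:10248, retried until it answers or kubeadm gives up waiting for the control plane. A minimal Go sketch of that kind of probe is below; only the URL and the connection-refused failure mode are taken from the log, while the retry interval and overall deadline are illustrative assumptions, not kubeadm's actual values.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the kubelet healthz endpoint named in the [kubelet-check] log lines.
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(40 * time.Second) // illustrative deadline
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// This is the failure mode in the log: connection refused,
			// because no kubelet process is listening on the port.
			fmt.Printf("healthz probe failed: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Printf("kubelet answered with status %d\n", resp.StatusCode)
		}
		time.Sleep(5 * time.Second) // illustrative retry interval
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}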
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (248.459257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-981420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)
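The post-mortem above skips the kubectl checks because 'out/minikube-linux-amd64 status --format={{.APIServer}}' printed "Stopped". That --format argument is a Go text/template evaluated against minikube's status output; the sketch below reproduces just the template step with a hand-rolled struct. Only the Host and APIServer field names are grounded in the templates the harness uses in this report; the Kubelet field and the sample values are illustrative assumptions.

package main

import (
	"os"
	"text/template"
)

// Status stands in for the object minikube renders its status from.
// Host and APIServer match the {{.Host}} and {{.APIServer}} templates
// used by the harness; the rest of the struct is illustrative.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	os.Stdout.WriteString("\n") // prints: Stopped
}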

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (371.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:38:40.407011509 +0000 UTC m=+6165.410720162
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.855µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-128870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
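What the failed wait above is checking, in essence: a pod carrying the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace must reach the Ready condition before the 9m0s deadline. A rough client-go sketch of such a poll follows; the kubeconfig path is a placeholder and the 10-second poll interval is an assumption, and this is not the test suite's actual helper, only an illustration of the condition it waits for.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the harness uses its own per-profile kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// 9m0s deadline, matching the timeout reported by the test.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("pod %s is Ready\n", p.Name)
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded waiting for k8s-app=kubernetes-dashboard")
			return
		case <-time.After(10 * time.Second): // illustrative poll interval
		}
	}
}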
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-128870 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-128870 logs -n 25: (1.344618455s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:37 UTC | 15 Mar 24 07:37 UTC |
	| start   | -p newest-cni-027190 --memory=2200 --alsologtostderr   | newest-cni-027190            | jenkins | v1.32.0 | 15 Mar 24 07:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:38 UTC | 15 Mar 24 07:38 UTC |
	| start   | -p auto-636355 --memory=3072                           | auto-636355                  | jenkins | v1.32.0 | 15 Mar 24 07:38 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:38:34
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:38:34.380881   63149 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:38:34.381030   63149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:38:34.381041   63149 out.go:304] Setting ErrFile to fd 2...
	I0315 07:38:34.381047   63149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:38:34.381243   63149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:38:34.381871   63149 out.go:298] Setting JSON to false
	I0315 07:38:34.382886   63149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8411,"bootTime":1710479904,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:38:34.382972   63149 start.go:139] virtualization: kvm guest
	I0315 07:38:34.385750   63149 out.go:177] * [auto-636355] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:38:34.387899   63149 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:38:34.387958   63149 notify.go:220] Checking for updates...
	I0315 07:38:34.389765   63149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:38:34.391271   63149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:38:34.393168   63149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:38:34.394950   63149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:38:34.396714   63149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:38:34.399551   63149 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:38:34.399841   63149 config.go:182] Loaded profile config "newest-cni-027190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:38:34.400158   63149 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:38:34.400507   63149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:38:34.440543   63149 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:38:34.441817   63149 start.go:297] selected driver: kvm2
	I0315 07:38:34.441835   63149 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:38:34.441857   63149 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:38:34.442603   63149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:38:34.442687   63149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:38:34.458732   63149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:38:34.458786   63149 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:38:34.459080   63149 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:38:34.459162   63149 cni.go:84] Creating CNI manager for ""
	I0315 07:38:34.459182   63149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:38:34.459194   63149 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:38:34.459275   63149 start.go:340] cluster config:
	{Name:auto-636355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:38:34.459379   63149 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:38:34.462394   63149 out.go:177] * Starting "auto-636355" primary control-plane node in "auto-636355" cluster
	I0315 07:38:34.464049   63149 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:38:34.464095   63149 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 07:38:34.464102   63149 cache.go:56] Caching tarball of preloaded images
	I0315 07:38:34.464194   63149 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:38:34.464211   63149 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 07:38:34.464338   63149 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/auto-636355/config.json ...
	I0315 07:38:34.464361   63149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/auto-636355/config.json: {Name:mk8b274a3a7b5caa87e26e8f61312b7c1e020de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:34.464606   63149 start.go:360] acquireMachinesLock for auto-636355: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:38:34.464659   63149 start.go:364] duration metric: took 28.115µs to acquireMachinesLock for "auto-636355"
	I0315 07:38:34.464683   63149 start.go:93] Provisioning new machine with config: &{Name:auto-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:38:34.464750   63149 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:38:33.700310   62652 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:38:33.724415   62652 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:38:33.769881   62652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:38:33.770035   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:33.770036   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-027190 minikube.k8s.io/updated_at=2024_03_15T07_38_33_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=newest-cni-027190 minikube.k8s.io/primary=true
	I0315 07:38:33.817868   62652 ops.go:34] apiserver oom_adj: -16
	I0315 07:38:34.158818   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:34.658968   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:35.158844   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:34.466515   63149 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0315 07:38:34.466693   63149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:38:34.466735   63149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:38:34.482424   63149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0315 07:38:34.482969   63149 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:38:34.483667   63149 main.go:141] libmachine: Using API Version  1
	I0315 07:38:34.483692   63149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:38:34.484066   63149 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:38:34.484327   63149 main.go:141] libmachine: (auto-636355) Calling .GetMachineName
	I0315 07:38:34.484488   63149 main.go:141] libmachine: (auto-636355) Calling .DriverName
	I0315 07:38:34.484707   63149 start.go:159] libmachine.API.Create for "auto-636355" (driver="kvm2")
	I0315 07:38:34.484734   63149 client.go:168] LocalClient.Create starting
	I0315 07:38:34.484762   63149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:38:34.484802   63149 main.go:141] libmachine: Decoding PEM data...
	I0315 07:38:34.484813   63149 main.go:141] libmachine: Parsing certificate...
	I0315 07:38:34.484855   63149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:38:34.484897   63149 main.go:141] libmachine: Decoding PEM data...
	I0315 07:38:34.484908   63149 main.go:141] libmachine: Parsing certificate...
	I0315 07:38:34.484926   63149 main.go:141] libmachine: Running pre-create checks...
	I0315 07:38:34.484935   63149 main.go:141] libmachine: (auto-636355) Calling .PreCreateCheck
	I0315 07:38:34.485393   63149 main.go:141] libmachine: (auto-636355) Calling .GetConfigRaw
	I0315 07:38:34.485863   63149 main.go:141] libmachine: Creating machine...
	I0315 07:38:34.485885   63149 main.go:141] libmachine: (auto-636355) Calling .Create
	I0315 07:38:34.486033   63149 main.go:141] libmachine: (auto-636355) Creating KVM machine...
	I0315 07:38:34.487518   63149 main.go:141] libmachine: (auto-636355) DBG | found existing default KVM network
	I0315 07:38:34.489163   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:34.489011   63171 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002700f0}
	I0315 07:38:34.489205   63149 main.go:141] libmachine: (auto-636355) DBG | created network xml: 
	I0315 07:38:34.489220   63149 main.go:141] libmachine: (auto-636355) DBG | <network>
	I0315 07:38:34.489227   63149 main.go:141] libmachine: (auto-636355) DBG |   <name>mk-auto-636355</name>
	I0315 07:38:34.489245   63149 main.go:141] libmachine: (auto-636355) DBG |   <dns enable='no'/>
	I0315 07:38:34.489252   63149 main.go:141] libmachine: (auto-636355) DBG |   
	I0315 07:38:34.489261   63149 main.go:141] libmachine: (auto-636355) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 07:38:34.489267   63149 main.go:141] libmachine: (auto-636355) DBG |     <dhcp>
	I0315 07:38:34.489275   63149 main.go:141] libmachine: (auto-636355) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 07:38:34.489280   63149 main.go:141] libmachine: (auto-636355) DBG |     </dhcp>
	I0315 07:38:34.489294   63149 main.go:141] libmachine: (auto-636355) DBG |   </ip>
	I0315 07:38:34.489304   63149 main.go:141] libmachine: (auto-636355) DBG |   
	I0315 07:38:34.489316   63149 main.go:141] libmachine: (auto-636355) DBG | </network>
	I0315 07:38:34.489329   63149 main.go:141] libmachine: (auto-636355) DBG | 
	I0315 07:38:34.494802   63149 main.go:141] libmachine: (auto-636355) DBG | trying to create private KVM network mk-auto-636355 192.168.39.0/24...
	I0315 07:38:34.570561   63149 main.go:141] libmachine: (auto-636355) DBG | private KVM network mk-auto-636355 192.168.39.0/24 created
	I0315 07:38:34.570609   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:34.570521   63171 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:38:34.570624   63149 main.go:141] libmachine: (auto-636355) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355 ...
	I0315 07:38:34.570655   63149 main.go:141] libmachine: (auto-636355) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:38:34.570757   63149 main.go:141] libmachine: (auto-636355) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:38:34.814831   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:34.814714   63171 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355/id_rsa...
	I0315 07:38:34.873425   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:34.873288   63171 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355/auto-636355.rawdisk...
	I0315 07:38:34.873490   63149 main.go:141] libmachine: (auto-636355) DBG | Writing magic tar header
	I0315 07:38:34.873506   63149 main.go:141] libmachine: (auto-636355) DBG | Writing SSH key tar header
	I0315 07:38:34.873519   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:34.873403   63171 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355 ...
	I0315 07:38:34.873579   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355
	I0315 07:38:34.873607   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355 (perms=drwx------)
	I0315 07:38:34.873621   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:38:34.873644   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:38:34.873659   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:38:34.873673   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:38:34.873689   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:38:34.873707   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:38:34.873722   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:38:34.873741   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:38:34.873751   63149 main.go:141] libmachine: (auto-636355) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:38:34.873760   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:38:34.873768   63149 main.go:141] libmachine: (auto-636355) Creating domain...
	I0315 07:38:34.873782   63149 main.go:141] libmachine: (auto-636355) DBG | Checking permissions on dir: /home
	I0315 07:38:34.873799   63149 main.go:141] libmachine: (auto-636355) DBG | Skipping /home - not owner
	I0315 07:38:34.874935   63149 main.go:141] libmachine: (auto-636355) define libvirt domain using xml: 
	I0315 07:38:34.874974   63149 main.go:141] libmachine: (auto-636355) <domain type='kvm'>
	I0315 07:38:34.874988   63149 main.go:141] libmachine: (auto-636355)   <name>auto-636355</name>
	I0315 07:38:34.874999   63149 main.go:141] libmachine: (auto-636355)   <memory unit='MiB'>3072</memory>
	I0315 07:38:34.875009   63149 main.go:141] libmachine: (auto-636355)   <vcpu>2</vcpu>
	I0315 07:38:34.875013   63149 main.go:141] libmachine: (auto-636355)   <features>
	I0315 07:38:34.875021   63149 main.go:141] libmachine: (auto-636355)     <acpi/>
	I0315 07:38:34.875025   63149 main.go:141] libmachine: (auto-636355)     <apic/>
	I0315 07:38:34.875030   63149 main.go:141] libmachine: (auto-636355)     <pae/>
	I0315 07:38:34.875040   63149 main.go:141] libmachine: (auto-636355)     
	I0315 07:38:34.875052   63149 main.go:141] libmachine: (auto-636355)   </features>
	I0315 07:38:34.875064   63149 main.go:141] libmachine: (auto-636355)   <cpu mode='host-passthrough'>
	I0315 07:38:34.875072   63149 main.go:141] libmachine: (auto-636355)   
	I0315 07:38:34.875083   63149 main.go:141] libmachine: (auto-636355)   </cpu>
	I0315 07:38:34.875092   63149 main.go:141] libmachine: (auto-636355)   <os>
	I0315 07:38:34.875102   63149 main.go:141] libmachine: (auto-636355)     <type>hvm</type>
	I0315 07:38:34.875111   63149 main.go:141] libmachine: (auto-636355)     <boot dev='cdrom'/>
	I0315 07:38:34.875115   63149 main.go:141] libmachine: (auto-636355)     <boot dev='hd'/>
	I0315 07:38:34.875124   63149 main.go:141] libmachine: (auto-636355)     <bootmenu enable='no'/>
	I0315 07:38:34.875130   63149 main.go:141] libmachine: (auto-636355)   </os>
	I0315 07:38:34.875143   63149 main.go:141] libmachine: (auto-636355)   <devices>
	I0315 07:38:34.875154   63149 main.go:141] libmachine: (auto-636355)     <disk type='file' device='cdrom'>
	I0315 07:38:34.875173   63149 main.go:141] libmachine: (auto-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355/boot2docker.iso'/>
	I0315 07:38:34.875192   63149 main.go:141] libmachine: (auto-636355)       <target dev='hdc' bus='scsi'/>
	I0315 07:38:34.875202   63149 main.go:141] libmachine: (auto-636355)       <readonly/>
	I0315 07:38:34.875206   63149 main.go:141] libmachine: (auto-636355)     </disk>
	I0315 07:38:34.875219   63149 main.go:141] libmachine: (auto-636355)     <disk type='file' device='disk'>
	I0315 07:38:34.875233   63149 main.go:141] libmachine: (auto-636355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:38:34.875249   63149 main.go:141] libmachine: (auto-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/auto-636355/auto-636355.rawdisk'/>
	I0315 07:38:34.875259   63149 main.go:141] libmachine: (auto-636355)       <target dev='hda' bus='virtio'/>
	I0315 07:38:34.875287   63149 main.go:141] libmachine: (auto-636355)     </disk>
	I0315 07:38:34.875347   63149 main.go:141] libmachine: (auto-636355)     <interface type='network'>
	I0315 07:38:34.875378   63149 main.go:141] libmachine: (auto-636355)       <source network='mk-auto-636355'/>
	I0315 07:38:34.875387   63149 main.go:141] libmachine: (auto-636355)       <model type='virtio'/>
	I0315 07:38:34.875395   63149 main.go:141] libmachine: (auto-636355)     </interface>
	I0315 07:38:34.875403   63149 main.go:141] libmachine: (auto-636355)     <interface type='network'>
	I0315 07:38:34.875411   63149 main.go:141] libmachine: (auto-636355)       <source network='default'/>
	I0315 07:38:34.875425   63149 main.go:141] libmachine: (auto-636355)       <model type='virtio'/>
	I0315 07:38:34.875474   63149 main.go:141] libmachine: (auto-636355)     </interface>
	I0315 07:38:34.875494   63149 main.go:141] libmachine: (auto-636355)     <serial type='pty'>
	I0315 07:38:34.875515   63149 main.go:141] libmachine: (auto-636355)       <target port='0'/>
	I0315 07:38:34.875527   63149 main.go:141] libmachine: (auto-636355)     </serial>
	I0315 07:38:34.875545   63149 main.go:141] libmachine: (auto-636355)     <console type='pty'>
	I0315 07:38:34.875557   63149 main.go:141] libmachine: (auto-636355)       <target type='serial' port='0'/>
	I0315 07:38:34.875568   63149 main.go:141] libmachine: (auto-636355)     </console>
	I0315 07:38:34.875579   63149 main.go:141] libmachine: (auto-636355)     <rng model='virtio'>
	I0315 07:38:34.875590   63149 main.go:141] libmachine: (auto-636355)       <backend model='random'>/dev/random</backend>
	I0315 07:38:34.875605   63149 main.go:141] libmachine: (auto-636355)     </rng>
	I0315 07:38:34.875618   63149 main.go:141] libmachine: (auto-636355)     
	I0315 07:38:34.875629   63149 main.go:141] libmachine: (auto-636355)     
	I0315 07:38:34.875641   63149 main.go:141] libmachine: (auto-636355)   </devices>
	I0315 07:38:34.875649   63149 main.go:141] libmachine: (auto-636355) </domain>
	I0315 07:38:34.875663   63149 main.go:141] libmachine: (auto-636355) 
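	The block above is the kvm2 driver handing its generated domain XML to libvirt ("define libvirt domain using xml", then "Creating domain..."). A minimal sketch of that define-and-start flow with the libvirt Go bindings is shown below; the import path, the qemu:///system connection URI and the trimmed-down XML are illustrative assumptions, not the driver's actual code.

// Sketch only: define and start a KVM guest from an XML document,
// roughly mirroring the "define libvirt domain using xml" step above.
// The import path (libvirt.org/go/libvirt), the qemu:///system URI and
// the minimal XML are assumptions for illustration, not minikube's code.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>auto-636355</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Register the domain definition with libvirtd (persistent define).
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// Boot the defined domain; this corresponds to "Creating domain..." in the log.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain auto-636355 defined and started")
}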
	I0315 07:38:34.879802   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:8a:9f:c3 in network default
	I0315 07:38:34.880417   63149 main.go:141] libmachine: (auto-636355) Ensuring networks are active...
	I0315 07:38:34.880455   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:34.881253   63149 main.go:141] libmachine: (auto-636355) Ensuring network default is active
	I0315 07:38:34.881625   63149 main.go:141] libmachine: (auto-636355) Ensuring network mk-auto-636355 is active
	I0315 07:38:34.882256   63149 main.go:141] libmachine: (auto-636355) Getting domain xml...
	I0315 07:38:34.883204   63149 main.go:141] libmachine: (auto-636355) Creating domain...
	I0315 07:38:36.172719   63149 main.go:141] libmachine: (auto-636355) Waiting to get IP...
	I0315 07:38:36.173677   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:36.174233   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:36.174262   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:36.174184   63171 retry.go:31] will retry after 238.291745ms: waiting for machine to come up
	I0315 07:38:36.413501   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:36.414002   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:36.414044   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:36.413968   63171 retry.go:31] will retry after 238.881075ms: waiting for machine to come up
	I0315 07:38:36.654694   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:36.655139   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:36.655192   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:36.655118   63171 retry.go:31] will retry after 326.086073ms: waiting for machine to come up
	I0315 07:38:36.982618   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:36.983168   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:36.983222   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:36.983141   63171 retry.go:31] will retry after 447.832058ms: waiting for machine to come up
	I0315 07:38:37.432804   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:37.433365   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:37.433386   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:37.433275   63171 retry.go:31] will retry after 617.38552ms: waiting for machine to come up
	I0315 07:38:38.051953   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:38.052504   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:38.052530   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:38.052452   63171 retry.go:31] will retry after 732.622415ms: waiting for machine to come up
	I0315 07:38:38.786212   63149 main.go:141] libmachine: (auto-636355) DBG | domain auto-636355 has defined MAC address 52:54:00:97:a7:9a in network mk-auto-636355
	I0315 07:38:38.786698   63149 main.go:141] libmachine: (auto-636355) DBG | unable to find current IP address of domain auto-636355 in network mk-auto-636355
	I0315 07:38:38.786744   63149 main.go:141] libmachine: (auto-636355) DBG | I0315 07:38:38.786654   63171 retry.go:31] will retry after 1.083467507s: waiting for machine to come up
	I0315 07:38:35.659008   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:36.159907   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:36.659064   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:37.159733   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:37.659066   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:38.159383   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:38.659179   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:39.159417   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:39.658953   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:38:40.159058   62652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
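	The interleaved lines from process 62652 belong to a second cluster start running in parallel: it reruns "kubectl get sa default" roughly twice a second until the default service account exists, which is the signal that the control plane has finished bootstrapping the namespace. A hedged sketch of that polling step via os/exec is below; the binary and kubeconfig paths are copied from the log, and waitForDefaultSA is an illustrative helper name.

// Sketch of the polling above: rerun "kubectl get sa default" about twice a
// second until it succeeds or the context times out. Paths are taken from
// the log; this is not the code minikube actually runs.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(ctx context.Context) error {
	kubectl := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl"
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; control plane is usable
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("default service account never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx); err != nil {
		fmt.Println(err)
	}
}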
	
	
	==> CRI-O <==
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.103659772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488321103594230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08a5a7a8-15df-4a7d-a30a-3cdfff2e438a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.104440886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe28adef-4b64-4bd5-948b-a0e80853c0ed name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.104516513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe28adef-4b64-4bd5-948b-a0e80853c0ed name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.104695251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe28adef-4b64-4bd5-948b-a0e80853c0ed name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.156241925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=207bf4e3-e3a4-4e89-99f8-25ea37b00591 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.156330213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=207bf4e3-e3a4-4e89-99f8-25ea37b00591 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.158286607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=370b85b3-a567-48af-8164-4a634b778fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.159066463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488321159030043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=370b85b3-a567-48af-8164-4a634b778fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.159819917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c8ee5ae-3907-4e8c-8193-cd1d86641448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.159892877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c8ee5ae-3907-4e8c-8193-cd1d86641448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.160241905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c8ee5ae-3907-4e8c-8193-cd1d86641448 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.209791404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d1cd917-f8f6-40eb-982b-0854a7324648 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.209921007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d1cd917-f8f6-40eb-982b-0854a7324648 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.211538278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bd85fc9-c5ae-4514-b4b5-4be283b8a8c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.212399919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488321212358548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bd85fc9-c5ae-4514-b4b5-4be283b8a8c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.213221757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8391f51-71bc-48c9-9060-0d17e6aba572 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.213293012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8391f51-71bc-48c9-9060-0d17e6aba572 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.213591533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8391f51-71bc-48c9-9060-0d17e6aba572 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.257511009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c97371d8-d74c-4cee-b37c-dcb0f39768a3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.257648271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c97371d8-d74c-4cee-b37c-dcb0f39768a3 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.259364507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7d87344-fd65-465a-98dd-76c2951a44a0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.260068538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488321260026683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7d87344-fd65-465a-98dd-76c2951a44a0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.260855602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cbcb33a-ff7a-413d-8b4f-0ba8fb39c8be name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.261004224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cbcb33a-ff7a-413d-8b4f-0ba8fb39c8be name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:41 default-k8s-diff-port-128870 crio[692]: time="2024-03-15 07:38:41.261274808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01,PodSandboxId:d8dfe67e86c229a338870252125d65638add7717ed4ec6f8943fc9670492c98e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487403130430903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b5a36e-b3cd-4258-8e18-8efc850a2bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5de8748d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146,PodSandboxId:ea4bb26b63178dcb6428d5a077e1b8e29977fd12819d57b8e8b24b8fde1eee60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710487402627622562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97bfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05d184b-c67c-43f2-8de4-1d170725deb3,},Annotations:map[string]string{io.kubernetes.container.hash: b3b18af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e,PodSandboxId:0a53fb2b72649d71306b34696ebd535ee0afacbcbb63137c442892b10a2e1d83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402523248414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gtx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acdf9648-b6c1-4427-9264-7b1b9c770690,},Annotations:map[string]string{io.kubernetes.container.hash: a6e5be0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc,PodSandboxId:07cf2940e19d6a927127bc691a4005ff834203ace2e087012021b0d55ce1fcb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487402429765020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4g87j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba0fa41-99fc-40bb-b877-
70017d0573c6,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8239f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb,PodSandboxId:f983f8bb37d97cc865813d276b69edc0acc8f3ba63bf6a47853105ec5872c203,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171048738159952019
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4810e866d6b6f3f1c4a648dc9090ff84,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a,PodSandboxId:94a97656f4e95378819a645ff7a81e98f293cfa917348074285399d496c8152f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487381546064554,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56da1ec4c0bf57d93de1846309a5d93f,},Annotations:map[string]string{io.kubernetes.container.hash: c496a33,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3,PodSandboxId:a50e57b625a5459a345586befd91c4e2140b580d2b435858beac8c11f21a220a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487381511895056,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf56a2d7cc82950d7802fc2dd863a044,},Annotations:map[string]string{io.kubernetes.container.hash: 20c61c28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e,PodSandboxId:644af00947fb2908ff3f6932c1aa02d853439bd3e5838292718576354a509056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487381477056460,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-128870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf7f2a648845bf419adbf8c95d8dbf86,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cbcb33a-ff7a-413d-8b4f-0ba8fb39c8be name=/runtime.v1.RuntimeService/ListContainers
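	The CRI-O block above is the runtime answering Version, ImageFsInfo and ListContainers RPCs over its unix socket while the log bundle is collected. A minimal sketch of issuing the same calls with the CRI v1 gRPC client follows; the import paths and the socket address (matching the node's cri-socket annotation below) are assumptions for illustration, not the collector minikube uses.

// Sketch of the RPCs shown in the CRI-O debug log: connect to the CRI
// socket and call Version and ListContainers with no filter. Import paths
// and the socket address are assumptions based on this log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Equivalent of the /runtime.v1.RuntimeService/Version requests above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Equivalent of the ListContainers requests: no filter, full list.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}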
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61f7b2f15345f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d8dfe67e86c22       storage-provisioner
	e8a70cf1fab35       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   ea4bb26b63178       kube-proxy-97bfn
	4d71da0a84bc1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   0a53fb2b72649       coredns-5dd5756b68-5gtx2
	261827033d961       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   07cf2940e19d6       coredns-5dd5756b68-4g87j
	468a9df4ca260       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   f983f8bb37d97       kube-scheduler-default-k8s-diff-port-128870
	be192990cd7f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   94a97656f4e95       etcd-default-k8s-diff-port-128870
	88b5ef91f5aff       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   a50e57b625a54       kube-apiserver-default-k8s-diff-port-128870
	e191efaaf507a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   644af00947fb2       kube-controller-manager-default-k8s-diff-port-128870
	
	
	==> coredns [261827033d961ec031de3bcff49d107acd432ebac0518665cfa23ef7404c08bc] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [4d71da0a84bc178ed442a5ab963df91c2d261664110fdd0895afdb36990c295e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-128870
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-128870
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=default-k8s-diff-port-128870
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:23:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-128870
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:38:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:33:39 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:33:39 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:33:39 +0000   Fri, 15 Mar 2024 07:23:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:33:39 +0000   Fri, 15 Mar 2024 07:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.123
	  Hostname:    default-k8s-diff-port-128870
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8ea27385ac541ca83767e82a1f9ffde
	  System UUID:                f8ea2738-5ac5-41ca-8376-7e82a1f9ffde
	  Boot ID:                    753fbe63-8d97-4300-8c5c-eafbaec56475
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4g87j                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-5gtx2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-128870                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-128870             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-128870    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-97bfn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-128870             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-59mcw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-128870 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-128870 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-128870 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-128870 event: Registered Node default-k8s-diff-port-128870 in Controller
	
	
	==> dmesg <==
	[  +0.052933] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528820] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.813920] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.633903] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar15 07:18] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.059279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065142] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.257862] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.138155] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.254923] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.165236] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +0.069907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.921342] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +6.354924] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.613664] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 07:22] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.538143] systemd-fstab-generator[3374]: Ignoring "noauto" option for root device
	[Mar15 07:23] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.524420] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[ +12.930505] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.109819] kauditd_printk_skb: 14 callbacks suppressed
	[Mar15 07:24] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [be192990cd7f7599504740b674d065c5d68e50ff8471f7009e29864c39606f9a] <==
	{"level":"info","ts":"2024-03-15T07:23:01.910366Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9e93ed333c2c6154","local-member-id":"2472baf7c187d","added-peer-id":"2472baf7c187d","added-peer-peer-urls":["https://192.168.50.123:2380"]}
	{"level":"info","ts":"2024-03-15T07:23:01.948998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.949351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.94947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d received MsgPreVoteResp from 2472baf7c187d at term 1"}
	{"level":"info","ts":"2024-03-15T07:23:01.949509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d received MsgVoteResp from 2472baf7c187d at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2472baf7c187d became leader at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.949642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2472baf7c187d elected leader 2472baf7c187d at term 2"}
	{"level":"info","ts":"2024-03-15T07:23:01.951314Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2472baf7c187d","local-member-attributes":"{Name:default-k8s-diff-port-128870 ClientURLs:[https://192.168.50.123:2379]}","request-path":"/0/members/2472baf7c187d/attributes","cluster-id":"9e93ed333c2c6154","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:23:01.951722Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.951865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:23:01.960848Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:23:01.964011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:23:01.963067Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:23:01.964135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:23:01.965085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.123:2379"}
	{"level":"info","ts":"2024-03-15T07:23:01.965613Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9e93ed333c2c6154","local-member-id":"2472baf7c187d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.96964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:23:01.969763Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:33:02.374763Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-03-15T07:33:02.37784Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":720,"took":"1.855857ms","hash":4056918777}
	{"level":"info","ts":"2024-03-15T07:33:02.378155Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4056918777,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2024-03-15T07:38:02.383271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-03-15T07:38:02.385717Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":963,"took":"1.531418ms","hash":1669133652}
	{"level":"info","ts":"2024-03-15T07:38:02.385804Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1669133652,"revision":963,"compact-revision":720}
	
	
	==> kernel <==
	 07:38:41 up 20 min,  0 users,  load average: 0.23, 0.25, 0.26
	Linux default-k8s-diff-port-128870 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [88b5ef91f5aff7826ce193a230964dbca530b2cef58d4da0d9816618ed70baf3] <==
	E0315 07:34:05.155758       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:34:05.155766       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:35:04.023736       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:36:04.023629       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:36:05.154544       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:36:05.154665       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:36:05.154694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:36:05.162669       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:36:05.162785       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:36:05.162815       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:37:04.023853       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:38:04.024148       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:38:04.159421       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:38:04.159545       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:38:04.160075       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:38:05.159713       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:38:05.159805       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:38:05.159814       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:38:05.159906       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:38:05.159926       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:38:05.161254       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e191efaaf507a7cbc37880dd20bbed56929baee8cdab04b3c9de485129cc8d8e] <==
	I0315 07:32:50.579902       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:33:20.089595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:33:20.589766       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:33:50.095441       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:33:50.599174       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:34:17.851145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="307.198µs"
	E0315 07:34:20.102236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:34:20.608174       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:34:30.847226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="131.453µs"
	E0315 07:34:50.107867       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:34:50.617427       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:35:20.113812       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:35:20.628319       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:35:50.120387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:35:50.637865       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:20.126221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:20.649143       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:50.131858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:50.661180       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:20.141522       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:20.671196       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:50.148718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:50.684568       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:38:20.155629       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:38:20.695664       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e8a70cf1fab3572c35f50ba9b126139dec81e6d332475279e501803ede8fa146] <==
	I0315 07:23:23.177069       1 server_others.go:69] "Using iptables proxy"
	I0315 07:23:23.224915       1 node.go:141] Successfully retrieved node IP: 192.168.50.123
	I0315 07:23:23.319027       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 07:23:23.319075       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:23:23.322485       1 server_others.go:152] "Using iptables Proxier"
	I0315 07:23:23.323180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:23:23.323829       1 server.go:846] "Version info" version="v1.28.4"
	I0315 07:23:23.323874       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:23:23.325164       1 config.go:188] "Starting service config controller"
	I0315 07:23:23.325595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:23:23.325717       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:23:23.325813       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:23:23.328004       1 config.go:315] "Starting node config controller"
	I0315 07:23:23.328035       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:23:23.426231       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:23:23.426239       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:23:23.428794       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [468a9df4ca260fc06d7aaebd5d35d96b94bdb42aa4875edd683111e9eebe92cb] <==
	W0315 07:23:04.183042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:04.183080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.099556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:23:05.099676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 07:23:05.122085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 07:23:05.122448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 07:23:05.163142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.163191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.194582       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:23:05.194637       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:23:05.235726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:23:05.235820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 07:23:05.339008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:23:05.339035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 07:23:05.456594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.456644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.518152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 07:23:05.518198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 07:23:05.531363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:23:05.531436       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 07:23:05.547100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 07:23:05.547206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 07:23:05.552195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 07:23:05.552246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0315 07:23:07.070190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:36:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:36:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:36:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:36:11 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:36:11.830343    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:36:25 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:36:25.830498    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:36:38 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:36:38.830326    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:36:53 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:36:53.831934    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:37:04 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:04.830091    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:37:07 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:07.962591    3702 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:37:07 default-k8s-diff-port-128870 kubelet[3702]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:37:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:37:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:37:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:37:16 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:16.830814    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:37:29 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:29.831640    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:37:43 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:43.831229    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:37:57 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:37:57.831132    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:38:07 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:38:07.971668    3702 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:38:07 default-k8s-diff-port-128870 kubelet[3702]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:38:07 default-k8s-diff-port-128870 kubelet[3702]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:38:07 default-k8s-diff-port-128870 kubelet[3702]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:38:07 default-k8s-diff-port-128870 kubelet[3702]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:38:11 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:38:11.834255    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:38:22 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:38:22.830412    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	Mar 15 07:38:33 default-k8s-diff-port-128870 kubelet[3702]: E0315 07:38:33.831473    3702 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-59mcw" podUID="da87c104-6961-4bb9-9fa3-b8bb104e2832"
	
	
	==> storage-provisioner [61f7b2f15345f6b02d026cbb2fc9e938c13198340162e87e065f7c9f6a643c01] <==
	I0315 07:23:23.257799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:23:23.278290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:23:23.278582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:23:23.295526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:23:23.298358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63!
	I0315 07:23:23.298576       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"093a883c-531b-45ef-aa8e-3f41d4f9810b", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63 became leader
	I0315 07:23:23.398654       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-128870_b6895069-bad8-4696-b5dd-e10cbc446b63!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-59mcw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw: exit status 1 (70.53527ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-59mcw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-128870 describe pod metrics-server-57f55c9bc5-59mcw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (371.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (544.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184055 -n no-preload-184055
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:41:42.171752595 +0000 UTC m=+6347.175461247
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-184055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-184055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (64.899398ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-184055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
E0315 07:41:42.261284   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-184055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-184055 logs -n 25: (2.682285283s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-636355 sudo systemctl                        | auto-636355               | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-636355 sudo find                             | auto-636355               | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p auto-636355 sudo crio                             | auto-636355               | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| delete  | -p auto-636355                                       | auto-636355               | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo docker                        | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| start   | -p custom-flannel-636355                             | custom-flannel-636355     | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo cat                           | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo                               | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo find                          | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-636355 sudo crio                          | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-636355                                    | kindnet-636355            | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC | 15 Mar 24 07:40 UTC |
	| start   | -p enable-default-cni-636355                         | enable-default-cni-636355 | jenkins | v1.32.0 | 15 Mar 24 07:40 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:40:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:40:44.045205   67845 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:40:44.045489   67845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:40:44.045499   67845 out.go:304] Setting ErrFile to fd 2...
	I0315 07:40:44.045504   67845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:40:44.045773   67845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:40:44.046452   67845 out.go:298] Setting JSON to false
	I0315 07:40:44.047691   67845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8540,"bootTime":1710479904,"procs":342,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:40:44.047782   67845 start.go:139] virtualization: kvm guest
	I0315 07:40:44.050723   67845 out.go:177] * [enable-default-cni-636355] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:40:44.052283   67845 notify.go:220] Checking for updates...
	I0315 07:40:44.052298   67845 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:40:44.053829   67845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:40:44.055130   67845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:40:44.056390   67845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:40:44.057635   67845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:40:44.058954   67845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:40:44.060774   67845 config.go:182] Loaded profile config "calico-636355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:40:44.060924   67845 config.go:182] Loaded profile config "custom-flannel-636355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:40:44.061084   67845 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:40:44.061197   67845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:40:44.103269   67845 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:40:44.105035   67845 start.go:297] selected driver: kvm2
	I0315 07:40:44.105057   67845 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:40:44.105068   67845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:40:44.105757   67845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:40:44.105844   67845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:40:44.122084   67845 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:40:44.122143   67845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0315 07:40:44.122324   67845 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0315 07:40:44.122347   67845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:40:44.122409   67845 cni.go:84] Creating CNI manager for "bridge"
	I0315 07:40:44.122426   67845 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:40:44.122487   67845 start.go:340] cluster config:
	{Name:enable-default-cni-636355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:40:44.122596   67845 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:40:44.124533   67845 out.go:177] * Starting "enable-default-cni-636355" primary control-plane node in "enable-default-cni-636355" cluster
	I0315 07:40:40.113648   67253 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0315 07:40:40.113847   67253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:40:40.113898   67253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:40:40.134619   67253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0315 07:40:40.135101   67253 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:40:40.135747   67253 main.go:141] libmachine: Using API Version  1
	I0315 07:40:40.135783   67253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:40:40.136239   67253 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:40:40.136488   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetMachineName
	I0315 07:40:40.136647   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:40:40.136844   67253 start.go:159] libmachine.API.Create for "custom-flannel-636355" (driver="kvm2")
	I0315 07:40:40.136874   67253 client.go:168] LocalClient.Create starting
	I0315 07:40:40.136913   67253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:40:40.136957   67253 main.go:141] libmachine: Decoding PEM data...
	I0315 07:40:40.136981   67253 main.go:141] libmachine: Parsing certificate...
	I0315 07:40:40.137062   67253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:40:40.137093   67253 main.go:141] libmachine: Decoding PEM data...
	I0315 07:40:40.137113   67253 main.go:141] libmachine: Parsing certificate...
	I0315 07:40:40.137137   67253 main.go:141] libmachine: Running pre-create checks...
	I0315 07:40:40.137154   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .PreCreateCheck
	I0315 07:40:40.137492   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetConfigRaw
	I0315 07:40:40.137885   67253 main.go:141] libmachine: Creating machine...
	I0315 07:40:40.137898   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .Create
	I0315 07:40:40.138049   67253 main.go:141] libmachine: (custom-flannel-636355) Creating KVM machine...
	I0315 07:40:40.139641   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found existing default KVM network
	I0315 07:40:40.141356   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:40.141213   67410 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0315 07:40:40.141382   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | created network xml: 
	I0315 07:40:40.141395   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | <network>
	I0315 07:40:40.141404   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   <name>mk-custom-flannel-636355</name>
	I0315 07:40:40.141413   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   <dns enable='no'/>
	I0315 07:40:40.141422   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   
	I0315 07:40:40.141433   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 07:40:40.141440   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |     <dhcp>
	I0315 07:40:40.141450   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 07:40:40.141462   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |     </dhcp>
	I0315 07:40:40.141489   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   </ip>
	I0315 07:40:40.141515   67253 main.go:141] libmachine: (custom-flannel-636355) DBG |   
	I0315 07:40:40.141527   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | </network>
	I0315 07:40:40.141540   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | 
	I0315 07:40:40.147738   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | trying to create private KVM network mk-custom-flannel-636355 192.168.39.0/24...
	I0315 07:40:40.226451   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | private KVM network mk-custom-flannel-636355 192.168.39.0/24 created
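The lines above show the kvm2 driver defining a dedicated libvirt network from the generated <network> XML. To verify that step by hand, the stock libvirt CLI is sufficient; a minimal sketch (network name and the qemu:///system URI are taken from the log above, and these commands are not part of the recorded run):
	# confirm the minikube-created network exists and is active
	virsh --connect qemu:///system net-list --all
	# dump its XML; it should match the <network> block logged above
	virsh --connect qemu:///system net-dumpxml mk-custom-flannel-636355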
	I0315 07:40:40.226483   67253 main.go:141] libmachine: (custom-flannel-636355) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355 ...
	I0315 07:40:40.226497   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:40.226423   67410 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:40:40.226527   67253 main.go:141] libmachine: (custom-flannel-636355) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:40:40.226546   67253 main.go:141] libmachine: (custom-flannel-636355) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:40:40.487736   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:40.487612   67410 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa...
	I0315 07:40:40.850064   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:40.849921   67410 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/custom-flannel-636355.rawdisk...
	I0315 07:40:40.850097   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Writing magic tar header
	I0315 07:40:40.850121   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Writing SSH key tar header
	I0315 07:40:40.850137   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:40.850100   67410 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355 ...
	I0315 07:40:40.850244   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355
	I0315 07:40:40.850277   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:40:40.850291   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:40:40.850309   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:40:40.850331   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355 (perms=drwx------)
	I0315 07:40:40.850348   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:40:40.850355   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:40:40.850370   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:40:40.850385   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:40:40.850399   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:40:40.850411   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:40:40.850427   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Checking permissions on dir: /home
	I0315 07:40:40.850435   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Skipping /home - not owner
	I0315 07:40:40.850453   67253 main.go:141] libmachine: (custom-flannel-636355) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:40:40.850464   67253 main.go:141] libmachine: (custom-flannel-636355) Creating domain...
	I0315 07:40:40.851587   67253 main.go:141] libmachine: (custom-flannel-636355) define libvirt domain using xml: 
	I0315 07:40:40.851611   67253 main.go:141] libmachine: (custom-flannel-636355) <domain type='kvm'>
	I0315 07:40:40.851632   67253 main.go:141] libmachine: (custom-flannel-636355)   <name>custom-flannel-636355</name>
	I0315 07:40:40.851640   67253 main.go:141] libmachine: (custom-flannel-636355)   <memory unit='MiB'>3072</memory>
	I0315 07:40:40.851649   67253 main.go:141] libmachine: (custom-flannel-636355)   <vcpu>2</vcpu>
	I0315 07:40:40.851662   67253 main.go:141] libmachine: (custom-flannel-636355)   <features>
	I0315 07:40:40.851685   67253 main.go:141] libmachine: (custom-flannel-636355)     <acpi/>
	I0315 07:40:40.851709   67253 main.go:141] libmachine: (custom-flannel-636355)     <apic/>
	I0315 07:40:40.851718   67253 main.go:141] libmachine: (custom-flannel-636355)     <pae/>
	I0315 07:40:40.851733   67253 main.go:141] libmachine: (custom-flannel-636355)     
	I0315 07:40:40.851759   67253 main.go:141] libmachine: (custom-flannel-636355)   </features>
	I0315 07:40:40.851771   67253 main.go:141] libmachine: (custom-flannel-636355)   <cpu mode='host-passthrough'>
	I0315 07:40:40.851782   67253 main.go:141] libmachine: (custom-flannel-636355)   
	I0315 07:40:40.851792   67253 main.go:141] libmachine: (custom-flannel-636355)   </cpu>
	I0315 07:40:40.851801   67253 main.go:141] libmachine: (custom-flannel-636355)   <os>
	I0315 07:40:40.851812   67253 main.go:141] libmachine: (custom-flannel-636355)     <type>hvm</type>
	I0315 07:40:40.851824   67253 main.go:141] libmachine: (custom-flannel-636355)     <boot dev='cdrom'/>
	I0315 07:40:40.851833   67253 main.go:141] libmachine: (custom-flannel-636355)     <boot dev='hd'/>
	I0315 07:40:40.851879   67253 main.go:141] libmachine: (custom-flannel-636355)     <bootmenu enable='no'/>
	I0315 07:40:40.851892   67253 main.go:141] libmachine: (custom-flannel-636355)   </os>
	I0315 07:40:40.851900   67253 main.go:141] libmachine: (custom-flannel-636355)   <devices>
	I0315 07:40:40.851907   67253 main.go:141] libmachine: (custom-flannel-636355)     <disk type='file' device='cdrom'>
	I0315 07:40:40.851920   67253 main.go:141] libmachine: (custom-flannel-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/boot2docker.iso'/>
	I0315 07:40:40.851931   67253 main.go:141] libmachine: (custom-flannel-636355)       <target dev='hdc' bus='scsi'/>
	I0315 07:40:40.851939   67253 main.go:141] libmachine: (custom-flannel-636355)       <readonly/>
	I0315 07:40:40.851948   67253 main.go:141] libmachine: (custom-flannel-636355)     </disk>
	I0315 07:40:40.851958   67253 main.go:141] libmachine: (custom-flannel-636355)     <disk type='file' device='disk'>
	I0315 07:40:40.851971   67253 main.go:141] libmachine: (custom-flannel-636355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:40:40.851996   67253 main.go:141] libmachine: (custom-flannel-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/custom-flannel-636355.rawdisk'/>
	I0315 07:40:40.852008   67253 main.go:141] libmachine: (custom-flannel-636355)       <target dev='hda' bus='virtio'/>
	I0315 07:40:40.852019   67253 main.go:141] libmachine: (custom-flannel-636355)     </disk>
	I0315 07:40:40.852028   67253 main.go:141] libmachine: (custom-flannel-636355)     <interface type='network'>
	I0315 07:40:40.852072   67253 main.go:141] libmachine: (custom-flannel-636355)       <source network='mk-custom-flannel-636355'/>
	I0315 07:40:40.852099   67253 main.go:141] libmachine: (custom-flannel-636355)       <model type='virtio'/>
	I0315 07:40:40.852111   67253 main.go:141] libmachine: (custom-flannel-636355)     </interface>
	I0315 07:40:40.852119   67253 main.go:141] libmachine: (custom-flannel-636355)     <interface type='network'>
	I0315 07:40:40.852128   67253 main.go:141] libmachine: (custom-flannel-636355)       <source network='default'/>
	I0315 07:40:40.852136   67253 main.go:141] libmachine: (custom-flannel-636355)       <model type='virtio'/>
	I0315 07:40:40.852143   67253 main.go:141] libmachine: (custom-flannel-636355)     </interface>
	I0315 07:40:40.852150   67253 main.go:141] libmachine: (custom-flannel-636355)     <serial type='pty'>
	I0315 07:40:40.852159   67253 main.go:141] libmachine: (custom-flannel-636355)       <target port='0'/>
	I0315 07:40:40.852166   67253 main.go:141] libmachine: (custom-flannel-636355)     </serial>
	I0315 07:40:40.852175   67253 main.go:141] libmachine: (custom-flannel-636355)     <console type='pty'>
	I0315 07:40:40.852183   67253 main.go:141] libmachine: (custom-flannel-636355)       <target type='serial' port='0'/>
	I0315 07:40:40.852192   67253 main.go:141] libmachine: (custom-flannel-636355)     </console>
	I0315 07:40:40.852199   67253 main.go:141] libmachine: (custom-flannel-636355)     <rng model='virtio'>
	I0315 07:40:40.852209   67253 main.go:141] libmachine: (custom-flannel-636355)       <backend model='random'>/dev/random</backend>
	I0315 07:40:40.852217   67253 main.go:141] libmachine: (custom-flannel-636355)     </rng>
	I0315 07:40:40.852225   67253 main.go:141] libmachine: (custom-flannel-636355)     
	I0315 07:40:40.852236   67253 main.go:141] libmachine: (custom-flannel-636355)     
	I0315 07:40:40.852245   67253 main.go:141] libmachine: (custom-flannel-636355)   </devices>
	I0315 07:40:40.852252   67253 main.go:141] libmachine: (custom-flannel-636355) </domain>
	I0315 07:40:40.852262   67253 main.go:141] libmachine: (custom-flannel-636355) 
	I0315 07:40:40.857896   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:4c:4c:40 in network default
	I0315 07:40:40.858671   67253 main.go:141] libmachine: (custom-flannel-636355) Ensuring networks are active...
	I0315 07:40:40.858711   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:40.859507   67253 main.go:141] libmachine: (custom-flannel-636355) Ensuring network default is active
	I0315 07:40:40.859915   67253 main.go:141] libmachine: (custom-flannel-636355) Ensuring network mk-custom-flannel-636355 is active
	I0315 07:40:40.860842   67253 main.go:141] libmachine: (custom-flannel-636355) Getting domain xml...
	I0315 07:40:40.864961   67253 main.go:141] libmachine: (custom-flannel-636355) Creating domain...
	I0315 07:40:42.404660   67253 main.go:141] libmachine: (custom-flannel-636355) Waiting to get IP...
	I0315 07:40:42.407299   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:42.408630   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:42.408659   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:42.408591   67410 retry.go:31] will retry after 264.672564ms: waiting for machine to come up
	I0315 07:40:42.675118   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:42.675812   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:42.675845   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:42.675724   67410 retry.go:31] will retry after 235.624451ms: waiting for machine to come up
	I0315 07:40:42.913539   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:42.917182   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:42.917218   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:42.917127   67410 retry.go:31] will retry after 368.581595ms: waiting for machine to come up
	I0315 07:40:43.732484   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:43.733311   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:43.733336   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:43.733231   67410 retry.go:31] will retry after 385.674948ms: waiting for machine to come up
	I0315 07:40:44.121197   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:44.121728   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:44.121758   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:44.121679   67410 retry.go:31] will retry after 700.469493ms: waiting for machine to come up
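The retry loop above is polling for a DHCP lease on the new domain's MAC address in network mk-custom-flannel-636355. A hedged sketch of the equivalent manual check (domain and network names from the log; output format varies by libvirt version; not executed by the test):
	# addresses the guest has obtained, read from the DHCP lease database
	virsh --connect qemu:///system domifaddr custom-flannel-636355 --source lease
	# the same information viewed from the network side
	virsh --connect qemu:///system net-dhcp-leases mk-custom-flannel-636355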
	I0315 07:40:43.918987   64861 crio.go:444] duration metric: took 2.051456377s to copy over tarball
	I0315 07:40:43.919069   64861 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:40:46.824160   64861 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.905058131s)
	I0315 07:40:46.824190   64861 crio.go:451] duration metric: took 2.905176775s to extract the tarball
	I0315 07:40:46.824200   64861 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:40:46.870727   64861 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:40:46.917341   64861 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:40:46.917371   64861 cache_images.go:84] Images are preloaded, skipping loading
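At this point the preloaded image tarball extracted into /var already satisfies the image check, so no pulls are needed. A small illustrative sketch of confirming that inside the guest (assumes jq is available, which the log does not show):
	# machine-readable listing used by the check above
	sudo crictl images --output json | jq '.images | length'
	# human-readable equivalent
	sudo crictl images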
	I0315 07:40:46.917379   64861 kubeadm.go:928] updating node { 192.168.61.29 8443 v1.28.4 crio true true} ...
	I0315 07:40:46.917496   64861 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-636355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0315 07:40:46.917587   64861 ssh_runner.go:195] Run: crio config
	I0315 07:40:46.977577   64861 cni.go:84] Creating CNI manager for "calico"
	I0315 07:40:46.977602   64861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:40:46.977625   64861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.29 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-636355 NodeName:calico-636355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:40:46.977759   64861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-636355"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:40:46.977819   64861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:40:46.990776   64861 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:40:46.990854   64861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:40:47.002892   64861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 07:40:47.024828   64861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:40:47.048557   64861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
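The rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml later in the log. A hypothetical way to sanity-check such a file without touching the node (kubeadm config validate ships with kubeadm >= 1.26; neither command is part of this run):
	# structural validation of the generated config
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# full dry run of init against the same config, without modifying the host
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run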
	I0315 07:40:47.070248   64861 ssh_runner.go:195] Run: grep 192.168.61.29	control-plane.minikube.internal$ /etc/hosts
	I0315 07:40:47.074564   64861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:40:47.089639   64861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:40:47.206909   64861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:40:47.226733   64861 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355 for IP: 192.168.61.29
	I0315 07:40:47.226781   64861 certs.go:194] generating shared ca certs ...
	I0315 07:40:47.226803   64861 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.226983   64861 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:40:47.227066   64861 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:40:47.227080   64861 certs.go:256] generating profile certs ...
	I0315 07:40:47.227149   64861 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.key
	I0315 07:40:47.227168   64861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.crt with IP's: []
	I0315 07:40:47.474322   64861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.crt ...
	I0315 07:40:47.474348   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.crt: {Name:mka516f57bb7a77df025d14515d52e302e9b96f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.474525   64861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.key ...
	I0315 07:40:47.474542   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/client.key: {Name:mkc97ece92ce31c3890b1528684271219c5f8a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.474651   64861 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key.d408263a
	I0315 07:40:47.474675   64861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt.d408263a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.29]
	I0315 07:40:47.758798   64861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt.d408263a ...
	I0315 07:40:47.758828   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt.d408263a: {Name:mk185e09bbe4ac47fc1bf2982ab2b92940ff200b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.759012   64861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key.d408263a ...
	I0315 07:40:47.759028   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key.d408263a: {Name:mka21974fcdf4a17e4520a0d196d2069383305c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.759126   64861 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt.d408263a -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt
	I0315 07:40:47.759218   64861 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key.d408263a -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key
	I0315 07:40:47.759276   64861 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.key
	I0315 07:40:47.759304   64861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.crt with IP's: []
	I0315 07:40:47.883297   64861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.crt ...
	I0315 07:40:47.883324   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.crt: {Name:mkb18a276c4159acab7b61e008781c9ead537672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.934754   64861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.key ...
	I0315 07:40:47.934789   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.key: {Name:mk2287a9148890023ff0fb0df610697a54cbc870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:47.935038   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:40:47.935082   64861 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:40:47.935098   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:40:47.935137   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:40:47.935178   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:40:47.935209   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:40:47.935269   64861 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:40:47.935986   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:40:47.980981   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:40:48.029241   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:40:48.063636   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:40:48.091966   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0315 07:40:48.120427   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:40:48.148457   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:40:48.174839   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/calico-636355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:40:48.202269   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:40:48.234240   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:40:48.265783   64861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:40:48.293650   64861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:40:48.313982   64861 ssh_runner.go:195] Run: openssl version
	I0315 07:40:48.321000   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:40:48.335261   64861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:40:48.340927   64861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:40:48.341011   64861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:40:48.347923   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:40:48.363099   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:40:48.376224   64861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:40:48.381314   64861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:40:48.381396   64861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:40:48.388075   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:40:48.400720   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:40:48.413488   64861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:40:48.418930   64861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:40:48.418994   64861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:40:48.425342   64861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
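The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: OpenSSL resolves trust anchors in /etc/ssl/certs through <hash>.<n> symlinks, which is exactly what the ln -fs commands create. A short sketch of how the hash is derived (values as logged; not an extra step performed by the test):
	# prints b5213941 for the minikube CA, matching the symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0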
	I0315 07:40:48.438450   64861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:40:48.443605   64861 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:40:48.443665   64861 kubeadm.go:391] StartCluster: {Name:calico-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:40:48.443757   64861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:40:48.443807   64861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:40:48.484867   64861 cri.go:89] found id: ""
	I0315 07:40:48.484941   64861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:40:48.496975   64861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:40:48.508363   64861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:40:48.519845   64861 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:40:48.519868   64861 kubeadm.go:156] found existing configuration files:
	
	I0315 07:40:48.519924   64861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:40:48.530855   64861 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:40:48.530923   64861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:40:48.549441   64861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:40:48.563256   64861 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:40:48.563317   64861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:40:48.574939   64861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:40:48.585583   64861 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:40:48.585672   64861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:40:48.596519   64861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:40:48.607193   64861 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:40:48.607242   64861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:40:48.618576   64861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:40:48.680247   64861 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:40:48.680389   64861 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:40:44.125885   67845 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:40:44.125931   67845 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 07:40:44.125942   67845 cache.go:56] Caching tarball of preloaded images
	I0315 07:40:44.126027   67845 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:40:44.126036   67845 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 07:40:44.126148   67845 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/enable-default-cni-636355/config.json ...
	I0315 07:40:44.126175   67845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/enable-default-cni-636355/config.json: {Name:mk541abb5ef5044c988257a8ad95ad38ad0a16ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:40:44.126316   67845 start.go:360] acquireMachinesLock for enable-default-cni-636355: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:40:48.859531   64861 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:40:48.859678   64861 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:40:48.859807   64861 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:40:49.112314   64861 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:40:44.823654   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:44.824487   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:44.824517   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:44.824406   67410 retry.go:31] will retry after 572.940087ms: waiting for machine to come up
	I0315 07:40:45.398870   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:45.399390   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:45.399449   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:45.399338   67410 retry.go:31] will retry after 1.138282643s: waiting for machine to come up
	I0315 07:40:46.539923   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:46.540499   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:46.540524   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:46.540449   67410 retry.go:31] will retry after 954.02037ms: waiting for machine to come up
	I0315 07:40:47.495498   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:47.495916   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:47.495947   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:47.495876   67410 retry.go:31] will retry after 1.5535427s: waiting for machine to come up
	I0315 07:40:49.051335   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:49.051917   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:49.051975   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:49.051906   67410 retry.go:31] will retry after 1.560066011s: waiting for machine to come up
	I0315 07:40:49.197900   64861 out.go:204]   - Generating certificates and keys ...
	I0315 07:40:49.198055   64861 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:40:49.198165   64861 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:40:49.233430   64861 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:40:49.299518   64861 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:40:49.612654   64861 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:40:49.911478   64861 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:40:50.091548   64861 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:40:50.091848   64861 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-636355 localhost] and IPs [192.168.61.29 127.0.0.1 ::1]
	I0315 07:40:50.230267   64861 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:40:50.230551   64861 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-636355 localhost] and IPs [192.168.61.29 127.0.0.1 ::1]
	I0315 07:40:50.485828   64861 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:40:50.627039   64861 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:40:50.673521   64861 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:40:50.673778   64861 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:40:50.768274   64861 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:40:51.035136   64861 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:40:51.224774   64861 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:40:51.334718   64861 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:40:51.335497   64861 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:40:51.338492   64861 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:40:51.340453   64861 out.go:204]   - Booting up control plane ...
	I0315 07:40:51.340607   64861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:40:51.340690   64861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:40:51.341096   64861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:40:51.374447   64861 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:40:51.374601   64861 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:40:51.374668   64861 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:40:51.546072   64861 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:40:50.613321   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:50.613781   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:50.613811   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:50.613731   67410 retry.go:31] will retry after 2.217478647s: waiting for machine to come up
	I0315 07:40:52.834421   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:52.834945   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:52.834974   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:52.834888   67410 retry.go:31] will retry after 2.455813691s: waiting for machine to come up
	I0315 07:40:58.045977   64861 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.501724 seconds
	I0315 07:40:58.046109   64861 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:40:58.068687   64861 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:40:58.601077   64861 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:40:58.601300   64861 kubeadm.go:309] [mark-control-plane] Marking the node calico-636355 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:40:59.117700   64861 kubeadm.go:309] [bootstrap-token] Using token: bvgpqo.jwy28ld1t976b5da
	I0315 07:40:55.292563   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:55.293031   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:55.293051   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:55.292997   67410 retry.go:31] will retry after 4.143632436s: waiting for machine to come up
	I0315 07:40:59.119165   64861 out.go:204]   - Configuring RBAC rules ...
	I0315 07:40:59.119263   64861 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:40:59.125309   64861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:40:59.132836   64861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:40:59.137760   64861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:40:59.144950   64861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:40:59.148359   64861 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:40:59.164730   64861 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:40:59.428170   64861 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:40:59.538347   64861 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:40:59.539193   64861 kubeadm.go:309] 
	I0315 07:40:59.539256   64861 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:40:59.539274   64861 kubeadm.go:309] 
	I0315 07:40:59.539356   64861 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:40:59.539367   64861 kubeadm.go:309] 
	I0315 07:40:59.539392   64861 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:40:59.539493   64861 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:40:59.539567   64861 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:40:59.539586   64861 kubeadm.go:309] 
	I0315 07:40:59.539629   64861 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:40:59.539637   64861 kubeadm.go:309] 
	I0315 07:40:59.539719   64861 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:40:59.539727   64861 kubeadm.go:309] 
	I0315 07:40:59.539797   64861 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:40:59.539885   64861 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:40:59.539965   64861 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:40:59.539981   64861 kubeadm.go:309] 
	I0315 07:40:59.540098   64861 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:40:59.540203   64861 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:40:59.540214   64861 kubeadm.go:309] 
	I0315 07:40:59.540297   64861 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bvgpqo.jwy28ld1t976b5da \
	I0315 07:40:59.540406   64861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:40:59.540431   64861 kubeadm.go:309] 	--control-plane 
	I0315 07:40:59.540437   64861 kubeadm.go:309] 
	I0315 07:40:59.540521   64861 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:40:59.540529   64861 kubeadm.go:309] 
	I0315 07:40:59.540635   64861 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bvgpqo.jwy28ld1t976b5da \
	I0315 07:40:59.540803   64861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:40:59.541615   64861 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:40:59.541652   64861 cni.go:84] Creating CNI manager for "calico"
	I0315 07:40:59.543292   64861 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0315 07:40:59.545023   64861 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 07:40:59.545047   64861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (252439 bytes)
	I0315 07:40:59.592601   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 07:41:01.631929   64861 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.03928838s)
	I0315 07:41:01.632002   64861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:41:01.632054   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:01.632108   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-636355 minikube.k8s.io/updated_at=2024_03_15T07_41_01_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=calico-636355 minikube.k8s.io/primary=true
	I0315 07:41:01.780196   64861 ops.go:34] apiserver oom_adj: -16
	I0315 07:41:01.780353   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:02.280703   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:02.781225   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:03.281039   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:40:59.438210   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:40:59.438817   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find current IP address of domain custom-flannel-636355 in network mk-custom-flannel-636355
	I0315 07:40:59.438846   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | I0315 07:40:59.438771   67410 retry.go:31] will retry after 5.612402111s: waiting for machine to come up
	I0315 07:41:03.780444   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:04.280818   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:04.780444   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:05.280715   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:05.780936   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:06.281310   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:06.781349   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:07.281379   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:07.781221   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:08.280593   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:05.054673   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:05.055205   67253 main.go:141] libmachine: (custom-flannel-636355) Found IP for machine: 192.168.39.210
	I0315 07:41:05.055240   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has current primary IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:05.055255   67253 main.go:141] libmachine: (custom-flannel-636355) Reserving static IP address...
	I0315 07:41:05.055538   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find host DHCP lease matching {name: "custom-flannel-636355", mac: "52:54:00:5f:1c:43", ip: "192.168.39.210"} in network mk-custom-flannel-636355
	I0315 07:41:05.133422   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Getting to WaitForSSH function...
	I0315 07:41:05.133449   67253 main.go:141] libmachine: (custom-flannel-636355) Reserved static IP address: 192.168.39.210
	I0315 07:41:05.133484   67253 main.go:141] libmachine: (custom-flannel-636355) Waiting for SSH to be available...
	I0315 07:41:05.136609   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:05.136858   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355
	I0315 07:41:05.136886   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | unable to find defined IP address of network mk-custom-flannel-636355 interface with MAC address 52:54:00:5f:1c:43
	I0315 07:41:05.137059   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Using SSH client type: external
	I0315 07:41:05.137082   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa (-rw-------)
	I0315 07:41:05.137134   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:41:05.137174   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | About to run SSH command:
	I0315 07:41:05.137191   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | exit 0
	I0315 07:41:05.140953   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | SSH cmd err, output: exit status 255: 
	I0315 07:41:05.140977   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0315 07:41:05.140986   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | command : exit 0
	I0315 07:41:05.141006   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | err     : exit status 255
	I0315 07:41:05.141021   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | output  : 
	I0315 07:41:08.141231   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Getting to WaitForSSH function...
	I0315 07:41:08.143960   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.144539   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.144588   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.144748   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Using SSH client type: external
	I0315 07:41:08.144779   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa (-rw-------)
	I0315 07:41:08.144818   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:41:08.144855   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | About to run SSH command:
	I0315 07:41:08.144876   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | exit 0
	I0315 07:41:08.272932   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | SSH cmd err, output: <nil>: 
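	[editor's note] The exit-status-255 probe at 07:41:05 followed by the clean run at 07:41:08 above is the driver's WaitForSSH loop: it simply keeps running `exit 0` over SSH until the guest answers. Below is a minimal standalone sketch of the same readiness check, assuming golang.org/x/crypto/ssh; the address, user, and key path are placeholders taken from this log, and the code is illustrative rather than the libmachine implementation.

    // sshprobe.go - illustrative SSH readiness probe, not libmachine's code.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH runs `exit 0` on the guest; a nil error means sshd is up and the key is accepted.
    func probeSSH(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        // Retry with a fixed backoff, as the log does (placeholder key path).
        for i := 0; i < 20; i++ {
            if err := probeSSH("192.168.39.210:22", "docker", "/path/to/machines/custom-flannel-636355/id_rsa"); err == nil {
                fmt.Println("ssh ready")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("ssh never became ready")
    }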
	I0315 07:41:08.273238   67253 main.go:141] libmachine: (custom-flannel-636355) KVM machine creation complete!
	I0315 07:41:08.273568   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetConfigRaw
	I0315 07:41:08.274220   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:08.274431   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:08.274599   67253 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 07:41:08.274612   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetState
	I0315 07:41:08.276163   67253 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 07:41:08.276180   67253 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 07:41:08.276189   67253 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 07:41:08.276197   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:08.278720   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.279099   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.279133   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.279297   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:08.279469   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.279625   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.279762   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:08.279932   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:08.280159   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:08.280171   67253 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 07:41:08.392073   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:41:08.392093   67253 main.go:141] libmachine: Detecting the provisioner...
	I0315 07:41:08.392101   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:08.395220   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.395552   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.395585   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.395697   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:08.395907   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.396110   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.396268   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:08.396429   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:08.396661   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:08.396679   67253 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 07:41:08.513529   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 07:41:08.513606   67253 main.go:141] libmachine: found compatible host: buildroot
	I0315 07:41:08.513615   67253 main.go:141] libmachine: Provisioning with buildroot...
	I0315 07:41:08.513622   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetMachineName
	I0315 07:41:08.513903   67253 buildroot.go:166] provisioning hostname "custom-flannel-636355"
	I0315 07:41:08.513933   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetMachineName
	I0315 07:41:08.514210   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:08.517026   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.517452   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.517471   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.517626   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:08.517826   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.517977   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.518106   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:08.518233   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:08.518443   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:08.518456   67253 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-636355 && echo "custom-flannel-636355" | sudo tee /etc/hostname
	I0315 07:41:08.645009   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-636355
	
	I0315 07:41:08.645038   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:08.648245   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.648679   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.648708   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.648962   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:08.649148   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.649336   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:08.649521   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:08.649722   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:08.649909   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:08.649927   67253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-636355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-636355/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-636355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:41:08.776117   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:41:08.776144   67253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:41:08.776194   67253 buildroot.go:174] setting up certificates
	I0315 07:41:08.776212   67253 provision.go:84] configureAuth start
	I0315 07:41:08.776234   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetMachineName
	I0315 07:41:08.776553   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetIP
	I0315 07:41:08.779242   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.779573   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.779602   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.779755   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:08.782625   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.782998   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:08.783025   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:08.783190   67253 provision.go:143] copyHostCerts
	I0315 07:41:08.783241   67253 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:41:08.783249   67253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:41:08.783303   67253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:41:08.783383   67253 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:41:08.783397   67253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:41:08.783418   67253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:41:08.783463   67253 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:41:08.783470   67253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:41:08.783486   67253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:41:08.783527   67253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-636355 san=[127.0.0.1 192.168.39.210 custom-flannel-636355 localhost minikube]
	I0315 07:41:09.002529   67253 provision.go:177] copyRemoteCerts
	I0315 07:41:09.002594   67253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:41:09.002617   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.005467   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.005869   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.005901   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.006044   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.006242   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.006410   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.006580   67253 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa Username:docker}
	I0315 07:41:09.095616   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:41:09.122699   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:41:09.148950   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:41:09.175554   67253 provision.go:87] duration metric: took 399.321293ms to configureAuth
	I0315 07:41:09.175590   67253 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:41:09.175752   67253 config.go:182] Loaded profile config "custom-flannel-636355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:41:09.175832   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.178420   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.178761   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.178788   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.178941   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.179109   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.179273   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.179453   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.179614   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:09.179768   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:09.179783   67253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:41:09.742080   67845 start.go:364] duration metric: took 25.615741011s to acquireMachinesLock for "enable-default-cni-636355"
	I0315 07:41:09.742148   67845 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:41:09.742277   67845 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:41:08.780837   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:09.281319   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:09.780488   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:10.280436   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:10.780449   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:11.280658   64861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:11.466259   64861 kubeadm.go:1107] duration metric: took 9.834258681s to wait for elevateKubeSystemPrivileges
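	[editor's note] The repeated `kubectl get sa default` runs above are a readiness poll: elevateKubeSystemPrivileges only proceeds once the "default" service account exists in the new cluster. A minimal sketch of an equivalent wait is below, assuming kubectl is on PATH and reusing the kubeconfig path shown in the log; it is illustrative only, not minikube's implementation.

    // waitsa.go - poll for the default service account, illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if cmd.Run() == nil {
                return nil // the default service account exists; RBAC grants can proceed
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
        }
        return fmt.Errorf("default service account not ready within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("default service account is ready")
    }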
	W0315 07:41:11.466318   64861 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:41:11.466326   64861 kubeadm.go:393] duration metric: took 23.022664654s to StartCluster
	I0315 07:41:11.466346   64861 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:11.466437   64861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:41:11.467881   64861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:11.468181   64861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 07:41:11.468182   64861 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:41:11.470278   64861 out.go:177] * Verifying Kubernetes components...
	I0315 07:41:11.468276   64861 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:41:11.468489   64861 config.go:182] Loaded profile config "calico-636355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:41:11.470416   64861 addons.go:69] Setting storage-provisioner=true in profile "calico-636355"
	I0315 07:41:11.472052   64861 addons.go:234] Setting addon storage-provisioner=true in "calico-636355"
	I0315 07:41:11.470419   64861 addons.go:69] Setting default-storageclass=true in profile "calico-636355"
	I0315 07:41:11.472117   64861 host.go:66] Checking if "calico-636355" exists ...
	I0315 07:41:11.472157   64861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-636355"
	I0315 07:41:11.472552   64861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:11.472568   64861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:11.472604   64861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:11.472617   64861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:11.471990   64861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:41:11.492581   64861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0315 07:41:11.492747   64861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0315 07:41:11.493150   64861 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:11.493276   64861 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:11.493715   64861 main.go:141] libmachine: Using API Version  1
	I0315 07:41:11.493733   64861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:11.493898   64861 main.go:141] libmachine: Using API Version  1
	I0315 07:41:11.493910   64861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:11.494003   64861 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:11.494127   64861 main.go:141] libmachine: (calico-636355) Calling .GetState
	I0315 07:41:11.494195   64861 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:11.494712   64861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:11.494754   64861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:11.503465   64861 addons.go:234] Setting addon default-storageclass=true in "calico-636355"
	I0315 07:41:11.503503   64861 host.go:66] Checking if "calico-636355" exists ...
	I0315 07:41:11.503757   64861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:11.503791   64861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:11.519813   64861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43391
	I0315 07:41:11.520600   64861 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:11.521134   64861 main.go:141] libmachine: Using API Version  1
	I0315 07:41:11.521159   64861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:11.521555   64861 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:11.521716   64861 main.go:141] libmachine: (calico-636355) Calling .GetState
	I0315 07:41:11.525008   64861 main.go:141] libmachine: (calico-636355) Calling .DriverName
	I0315 07:41:09.480379   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:41:09.480411   67253 main.go:141] libmachine: Checking connection to Docker...
	I0315 07:41:09.480422   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetURL
	I0315 07:41:09.481961   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | Using libvirt version 6000000
	I0315 07:41:09.484418   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.484933   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.484963   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.485173   67253 main.go:141] libmachine: Docker is up and running!
	I0315 07:41:09.485190   67253 main.go:141] libmachine: Reticulating splines...
	I0315 07:41:09.485197   67253 client.go:171] duration metric: took 29.348314999s to LocalClient.Create
	I0315 07:41:09.485219   67253 start.go:167] duration metric: took 29.348374521s to libmachine.API.Create "custom-flannel-636355"
	I0315 07:41:09.485229   67253 start.go:293] postStartSetup for "custom-flannel-636355" (driver="kvm2")
	I0315 07:41:09.485239   67253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:41:09.485254   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:09.485500   67253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:41:09.485529   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.487940   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.488437   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.488488   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.488646   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.488892   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.489059   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.489211   67253 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa Username:docker}
	I0315 07:41:09.575296   67253 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:41:09.580370   67253 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:41:09.580398   67253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:41:09.580487   67253 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:41:09.580588   67253 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:41:09.580704   67253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:41:09.590802   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:41:09.620536   67253 start.go:296] duration metric: took 135.291925ms for postStartSetup
	I0315 07:41:09.620589   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetConfigRaw
	I0315 07:41:09.621232   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetIP
	I0315 07:41:09.623929   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.624453   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.624493   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.624847   67253 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/config.json ...
	I0315 07:41:09.625092   67253 start.go:128] duration metric: took 29.514725178s to createHost
	I0315 07:41:09.625125   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.627761   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.628095   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.628120   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.628291   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.628492   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.628680   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.628827   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.629006   67253 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:09.629170   67253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0315 07:41:09.629180   67253 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:41:09.741898   67253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710488469.730740869
	
	I0315 07:41:09.741923   67253 fix.go:216] guest clock: 1710488469.730740869
	I0315 07:41:09.741933   67253 fix.go:229] Guest: 2024-03-15 07:41:09.730740869 +0000 UTC Remote: 2024-03-15 07:41:09.62511051 +0000 UTC m=+30.481828087 (delta=105.630359ms)
	I0315 07:41:09.741989   67253 fix.go:200] guest clock delta is within tolerance: 105.630359ms
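	[editor's note] The skew check above is plain subtraction of the two timestamps reported by fix.go:

    0.730740869 s (guest) - 0.625110510 s (remote) = 0.105630359 s ≈ 105.63 ms

	which matches the reported delta and is within the tolerance noted in the log, so no clock adjustment is made.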
	I0315 07:41:09.741996   67253 start.go:83] releasing machines lock for "custom-flannel-636355", held for 29.631784009s
	I0315 07:41:09.742025   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:09.742366   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetIP
	I0315 07:41:09.745378   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.745720   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.745754   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.745990   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:09.746627   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:09.746810   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .DriverName
	I0315 07:41:09.746897   67253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:41:09.746941   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.746996   67253 ssh_runner.go:195] Run: cat /version.json
	I0315 07:41:09.747022   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHHostname
	I0315 07:41:09.749940   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.750241   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.750377   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.750434   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.750493   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.750625   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:09.750646   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:09.750689   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.750816   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHPort
	I0315 07:41:09.750900   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.750974   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHKeyPath
	I0315 07:41:09.751144   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetSSHUsername
	I0315 07:41:09.751159   67253 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa Username:docker}
	I0315 07:41:09.751281   67253 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/custom-flannel-636355/id_rsa Username:docker}
	I0315 07:41:09.873914   67253 ssh_runner.go:195] Run: systemctl --version
	I0315 07:41:09.883347   67253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:41:10.060369   67253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:41:10.069941   67253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:41:10.070009   67253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:41:10.096034   67253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:41:10.096072   67253 start.go:494] detecting cgroup driver to use...
	I0315 07:41:10.096169   67253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:41:10.114391   67253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:41:10.129637   67253 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:41:10.129701   67253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:41:10.146224   67253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:41:10.161563   67253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:41:10.301942   67253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:41:10.479253   67253 docker.go:233] disabling docker service ...
	I0315 07:41:10.479326   67253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:41:10.495550   67253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:41:10.514845   67253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:41:10.648605   67253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:41:10.797680   67253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:41:10.815822   67253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:41:10.837898   67253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:41:10.837967   67253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:10.852036   67253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:41:10.852117   67253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:10.867367   67253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:10.881155   67253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
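	[editor's note] After the three sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up with lines equivalent to the following. This is reconstructed from the commands in this log, not captured from the run:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"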
	I0315 07:41:10.896261   67253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:41:10.910072   67253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:41:10.923905   67253 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:41:10.923968   67253 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:41:10.941483   67253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:41:10.955630   67253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:41:11.084274   67253 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:41:11.371575   67253 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:41:11.371643   67253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:41:11.377088   67253 start.go:562] Will wait 60s for crictl version
	I0315 07:41:11.377173   67253 ssh_runner.go:195] Run: which crictl
	I0315 07:41:11.382567   67253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:41:11.427326   67253 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:41:11.427410   67253 ssh_runner.go:195] Run: crio --version
	I0315 07:41:11.478618   67253 ssh_runner.go:195] Run: crio --version
	I0315 07:41:11.527367   64861 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:41:11.528380   67253 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:41:11.528902   64861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0315 07:41:11.529322   64861 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:41:11.529336   64861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:41:11.529354   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHHostname
	I0315 07:41:11.529726   64861 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:11.530297   64861 main.go:141] libmachine: Using API Version  1
	I0315 07:41:11.530310   64861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:11.530754   64861 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:11.531361   64861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:11.531398   64861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:11.540131   64861 main.go:141] libmachine: (calico-636355) DBG | domain calico-636355 has defined MAC address 52:54:00:b4:82:cf in network mk-calico-636355
	I0315 07:41:11.541444   64861 main.go:141] libmachine: (calico-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:82:cf", ip: ""} in network mk-calico-636355: {Iface:virbr3 ExpiryTime:2024-03-15 08:40:30 +0000 UTC Type:0 Mac:52:54:00:b4:82:cf Iaid: IPaddr:192.168.61.29 Prefix:24 Hostname:calico-636355 Clientid:01:52:54:00:b4:82:cf}
	I0315 07:41:11.541471   64861 main.go:141] libmachine: (calico-636355) DBG | domain calico-636355 has defined IP address 192.168.61.29 and MAC address 52:54:00:b4:82:cf in network mk-calico-636355
	I0315 07:41:11.542001   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHPort
	I0315 07:41:11.542240   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHKeyPath
	I0315 07:41:11.542644   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHUsername
	I0315 07:41:11.542788   64861 sshutil.go:53] new ssh client: &{IP:192.168.61.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/calico-636355/id_rsa Username:docker}
	I0315 07:41:11.553615   64861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0315 07:41:11.554063   64861 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:11.554680   64861 main.go:141] libmachine: Using API Version  1
	I0315 07:41:11.554697   64861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:11.555090   64861 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:11.555319   64861 main.go:141] libmachine: (calico-636355) Calling .GetState
	I0315 07:41:11.557448   64861 main.go:141] libmachine: (calico-636355) Calling .DriverName
	I0315 07:41:11.557772   64861 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:41:11.557787   64861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:41:11.557804   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHHostname
	I0315 07:41:11.560755   64861 main.go:141] libmachine: (calico-636355) DBG | domain calico-636355 has defined MAC address 52:54:00:b4:82:cf in network mk-calico-636355
	I0315 07:41:11.561198   64861 main.go:141] libmachine: (calico-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:82:cf", ip: ""} in network mk-calico-636355: {Iface:virbr3 ExpiryTime:2024-03-15 08:40:30 +0000 UTC Type:0 Mac:52:54:00:b4:82:cf Iaid: IPaddr:192.168.61.29 Prefix:24 Hostname:calico-636355 Clientid:01:52:54:00:b4:82:cf}
	I0315 07:41:11.561220   64861 main.go:141] libmachine: (calico-636355) DBG | domain calico-636355 has defined IP address 192.168.61.29 and MAC address 52:54:00:b4:82:cf in network mk-calico-636355
	I0315 07:41:11.561394   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHPort
	I0315 07:41:11.561646   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHKeyPath
	I0315 07:41:11.561828   64861 main.go:141] libmachine: (calico-636355) Calling .GetSSHUsername
	I0315 07:41:11.562014   64861 sshutil.go:53] new ssh client: &{IP:192.168.61.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/calico-636355/id_rsa Username:docker}
	I0315 07:41:11.886635   64861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 07:41:11.886765   64861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:41:12.139053   64861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:41:12.159514   64861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:41:09.744482   67845 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0315 07:41:09.744666   67845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:41:09.744723   67845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:41:09.761203   67845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0315 07:41:09.761626   67845 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:41:09.762216   67845 main.go:141] libmachine: Using API Version  1
	I0315 07:41:09.762236   67845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:41:09.762566   67845 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:41:09.762765   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetMachineName
	I0315 07:41:09.762899   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:09.763039   67845 start.go:159] libmachine.API.Create for "enable-default-cni-636355" (driver="kvm2")
	I0315 07:41:09.763065   67845 client.go:168] LocalClient.Create starting
	I0315 07:41:09.763091   67845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:41:09.763122   67845 main.go:141] libmachine: Decoding PEM data...
	I0315 07:41:09.763135   67845 main.go:141] libmachine: Parsing certificate...
	I0315 07:41:09.763190   67845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:41:09.763208   67845 main.go:141] libmachine: Decoding PEM data...
	I0315 07:41:09.763219   67845 main.go:141] libmachine: Parsing certificate...
	I0315 07:41:09.763239   67845 main.go:141] libmachine: Running pre-create checks...
	I0315 07:41:09.763247   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .PreCreateCheck
	I0315 07:41:09.763633   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetConfigRaw
	I0315 07:41:09.764048   67845 main.go:141] libmachine: Creating machine...
	I0315 07:41:09.764061   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .Create
	I0315 07:41:09.764216   67845 main.go:141] libmachine: (enable-default-cni-636355) Creating KVM machine...
	I0315 07:41:09.765671   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found existing default KVM network
	I0315 07:41:09.767048   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:09.766873   68012 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d3:e2:82} reservation:<nil>}
	I0315 07:41:09.768533   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:09.768427   68012 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028aab0}
	I0315 07:41:09.768559   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | created network xml: 
	I0315 07:41:09.768569   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | <network>
	I0315 07:41:09.768580   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   <name>mk-enable-default-cni-636355</name>
	I0315 07:41:09.768593   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   <dns enable='no'/>
	I0315 07:41:09.768601   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   
	I0315 07:41:09.768616   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0315 07:41:09.768624   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |     <dhcp>
	I0315 07:41:09.768633   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0315 07:41:09.768648   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |     </dhcp>
	I0315 07:41:09.768656   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   </ip>
	I0315 07:41:09.768665   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG |   
	I0315 07:41:09.768671   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | </network>
	I0315 07:41:09.768680   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | 
	I0315 07:41:09.774726   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | trying to create private KVM network mk-enable-default-cni-636355 192.168.50.0/24...
	I0315 07:41:09.858025   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | private KVM network mk-enable-default-cni-636355 192.168.50.0/24 created
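At this point the private libvirt network exists on the host; it could be inspected with standard virsh tooling, for example:

    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system net-dumpxml mk-enable-default-cni-636355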
	I0315 07:41:09.858055   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:09.857997   68012 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:41:09.858082   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355 ...
	I0315 07:41:09.858099   67845 main.go:141] libmachine: (enable-default-cni-636355) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:41:09.858241   67845 main.go:141] libmachine: (enable-default-cni-636355) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:41:10.103106   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:10.102936   68012 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa...
	I0315 07:41:10.358792   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:10.358592   68012 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/enable-default-cni-636355.rawdisk...
	I0315 07:41:10.358827   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Writing magic tar header
	I0315 07:41:10.358846   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Writing SSH key tar header
	I0315 07:41:10.358861   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:10.358710   68012 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355 ...
	I0315 07:41:10.358876   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355 (perms=drwx------)
	I0315 07:41:10.358897   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:41:10.358911   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:41:10.358926   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355
	I0315 07:41:10.358957   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:41:10.358984   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:41:10.358996   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:41:10.359016   67845 main.go:141] libmachine: (enable-default-cni-636355) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:41:10.359027   67845 main.go:141] libmachine: (enable-default-cni-636355) Creating domain...
	I0315 07:41:10.359042   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:41:10.359055   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:41:10.359070   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:41:10.359096   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:41:10.359108   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Checking permissions on dir: /home
	I0315 07:41:10.359117   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Skipping /home - not owner
	I0315 07:41:10.360411   67845 main.go:141] libmachine: (enable-default-cni-636355) define libvirt domain using xml: 
	I0315 07:41:10.360449   67845 main.go:141] libmachine: (enable-default-cni-636355) <domain type='kvm'>
	I0315 07:41:10.360496   67845 main.go:141] libmachine: (enable-default-cni-636355)   <name>enable-default-cni-636355</name>
	I0315 07:41:10.360517   67845 main.go:141] libmachine: (enable-default-cni-636355)   <memory unit='MiB'>3072</memory>
	I0315 07:41:10.360539   67845 main.go:141] libmachine: (enable-default-cni-636355)   <vcpu>2</vcpu>
	I0315 07:41:10.360551   67845 main.go:141] libmachine: (enable-default-cni-636355)   <features>
	I0315 07:41:10.360569   67845 main.go:141] libmachine: (enable-default-cni-636355)     <acpi/>
	I0315 07:41:10.360580   67845 main.go:141] libmachine: (enable-default-cni-636355)     <apic/>
	I0315 07:41:10.360592   67845 main.go:141] libmachine: (enable-default-cni-636355)     <pae/>
	I0315 07:41:10.360603   67845 main.go:141] libmachine: (enable-default-cni-636355)     
	I0315 07:41:10.360612   67845 main.go:141] libmachine: (enable-default-cni-636355)   </features>
	I0315 07:41:10.360619   67845 main.go:141] libmachine: (enable-default-cni-636355)   <cpu mode='host-passthrough'>
	I0315 07:41:10.360644   67845 main.go:141] libmachine: (enable-default-cni-636355)   
	I0315 07:41:10.360659   67845 main.go:141] libmachine: (enable-default-cni-636355)   </cpu>
	I0315 07:41:10.360674   67845 main.go:141] libmachine: (enable-default-cni-636355)   <os>
	I0315 07:41:10.360686   67845 main.go:141] libmachine: (enable-default-cni-636355)     <type>hvm</type>
	I0315 07:41:10.360700   67845 main.go:141] libmachine: (enable-default-cni-636355)     <boot dev='cdrom'/>
	I0315 07:41:10.360706   67845 main.go:141] libmachine: (enable-default-cni-636355)     <boot dev='hd'/>
	I0315 07:41:10.360718   67845 main.go:141] libmachine: (enable-default-cni-636355)     <bootmenu enable='no'/>
	I0315 07:41:10.360725   67845 main.go:141] libmachine: (enable-default-cni-636355)   </os>
	I0315 07:41:10.360739   67845 main.go:141] libmachine: (enable-default-cni-636355)   <devices>
	I0315 07:41:10.360751   67845 main.go:141] libmachine: (enable-default-cni-636355)     <disk type='file' device='cdrom'>
	I0315 07:41:10.360765   67845 main.go:141] libmachine: (enable-default-cni-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/boot2docker.iso'/>
	I0315 07:41:10.360776   67845 main.go:141] libmachine: (enable-default-cni-636355)       <target dev='hdc' bus='scsi'/>
	I0315 07:41:10.360788   67845 main.go:141] libmachine: (enable-default-cni-636355)       <readonly/>
	I0315 07:41:10.360798   67845 main.go:141] libmachine: (enable-default-cni-636355)     </disk>
	I0315 07:41:10.360808   67845 main.go:141] libmachine: (enable-default-cni-636355)     <disk type='file' device='disk'>
	I0315 07:41:10.360818   67845 main.go:141] libmachine: (enable-default-cni-636355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:41:10.360836   67845 main.go:141] libmachine: (enable-default-cni-636355)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/enable-default-cni-636355.rawdisk'/>
	I0315 07:41:10.360847   67845 main.go:141] libmachine: (enable-default-cni-636355)       <target dev='hda' bus='virtio'/>
	I0315 07:41:10.360858   67845 main.go:141] libmachine: (enable-default-cni-636355)     </disk>
	I0315 07:41:10.360870   67845 main.go:141] libmachine: (enable-default-cni-636355)     <interface type='network'>
	I0315 07:41:10.360894   67845 main.go:141] libmachine: (enable-default-cni-636355)       <source network='mk-enable-default-cni-636355'/>
	I0315 07:41:10.360905   67845 main.go:141] libmachine: (enable-default-cni-636355)       <model type='virtio'/>
	I0315 07:41:10.360913   67845 main.go:141] libmachine: (enable-default-cni-636355)     </interface>
	I0315 07:41:10.360930   67845 main.go:141] libmachine: (enable-default-cni-636355)     <interface type='network'>
	I0315 07:41:10.360942   67845 main.go:141] libmachine: (enable-default-cni-636355)       <source network='default'/>
	I0315 07:41:10.360952   67845 main.go:141] libmachine: (enable-default-cni-636355)       <model type='virtio'/>
	I0315 07:41:10.360963   67845 main.go:141] libmachine: (enable-default-cni-636355)     </interface>
	I0315 07:41:10.360978   67845 main.go:141] libmachine: (enable-default-cni-636355)     <serial type='pty'>
	I0315 07:41:10.360992   67845 main.go:141] libmachine: (enable-default-cni-636355)       <target port='0'/>
	I0315 07:41:10.361003   67845 main.go:141] libmachine: (enable-default-cni-636355)     </serial>
	I0315 07:41:10.361012   67845 main.go:141] libmachine: (enable-default-cni-636355)     <console type='pty'>
	I0315 07:41:10.361024   67845 main.go:141] libmachine: (enable-default-cni-636355)       <target type='serial' port='0'/>
	I0315 07:41:10.361037   67845 main.go:141] libmachine: (enable-default-cni-636355)     </console>
	I0315 07:41:10.361061   67845 main.go:141] libmachine: (enable-default-cni-636355)     <rng model='virtio'>
	I0315 07:41:10.361075   67845 main.go:141] libmachine: (enable-default-cni-636355)       <backend model='random'>/dev/random</backend>
	I0315 07:41:10.361083   67845 main.go:141] libmachine: (enable-default-cni-636355)     </rng>
	I0315 07:41:10.361092   67845 main.go:141] libmachine: (enable-default-cni-636355)     
	I0315 07:41:10.361103   67845 main.go:141] libmachine: (enable-default-cni-636355)     
	I0315 07:41:10.361113   67845 main.go:141] libmachine: (enable-default-cni-636355)   </devices>
	I0315 07:41:10.361123   67845 main.go:141] libmachine: (enable-default-cni-636355) </domain>
	I0315 07:41:10.361135   67845 main.go:141] libmachine: (enable-default-cni-636355) 
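The XML above is what gets handed to libvirt when the domain is defined; once defined, the same definition and its interfaces can be read back with, for example:

    virsh --connect qemu:///system dumpxml enable-default-cni-636355
    virsh --connect qemu:///system domiflist enable-default-cni-636355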
	I0315 07:41:10.365844   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:22:65:11 in network default
	I0315 07:41:10.366616   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:10.366643   67845 main.go:141] libmachine: (enable-default-cni-636355) Ensuring networks are active...
	I0315 07:41:10.367569   67845 main.go:141] libmachine: (enable-default-cni-636355) Ensuring network default is active
	I0315 07:41:10.367906   67845 main.go:141] libmachine: (enable-default-cni-636355) Ensuring network mk-enable-default-cni-636355 is active
	I0315 07:41:10.368568   67845 main.go:141] libmachine: (enable-default-cni-636355) Getting domain xml...
	I0315 07:41:10.369645   67845 main.go:141] libmachine: (enable-default-cni-636355) Creating domain...
	I0315 07:41:12.096780   67845 main.go:141] libmachine: (enable-default-cni-636355) Waiting to get IP...
	I0315 07:41:12.097949   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:12.098448   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:12.098480   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:12.098425   68012 retry.go:31] will retry after 241.643239ms: waiting for machine to come up
	I0315 07:41:12.342191   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:12.342916   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:12.342944   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:12.342879   68012 retry.go:31] will retry after 327.719596ms: waiting for machine to come up
	I0315 07:41:12.672701   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:12.673369   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:12.673397   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:12.673331   68012 retry.go:31] will retry after 456.348279ms: waiting for machine to come up
	I0315 07:41:13.132067   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:13.132605   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:13.132634   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:13.132543   68012 retry.go:31] will retry after 593.614242ms: waiting for machine to come up
	I0315 07:41:13.728437   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:13.729146   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:13.729177   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:13.729052   68012 retry.go:31] will retry after 663.744184ms: waiting for machine to come up
	I0315 07:41:11.530080   67253 main.go:141] libmachine: (custom-flannel-636355) Calling .GetIP
	I0315 07:41:11.541264   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:11.542234   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:1c:43", ip: ""} in network mk-custom-flannel-636355: {Iface:virbr1 ExpiryTime:2024-03-15 08:40:56 +0000 UTC Type:0 Mac:52:54:00:5f:1c:43 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:custom-flannel-636355 Clientid:01:52:54:00:5f:1c:43}
	I0315 07:41:11.542255   67253 main.go:141] libmachine: (custom-flannel-636355) DBG | domain custom-flannel-636355 has defined IP address 192.168.39.210 and MAC address 52:54:00:5f:1c:43 in network mk-custom-flannel-636355
	I0315 07:41:11.542671   67253 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:41:11.548978   67253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
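The bash one-liner above rewrites /etc/hosts atomically: it filters any stale host.minikube.internal entry into a temp file, appends the current gateway mapping, and copies the result back, so afterwards the guest's /etc/hosts contains a line like:

    192.168.39.1	host.minikube.internal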
	I0315 07:41:11.568116   67253 kubeadm.go:877] updating cluster {Name:custom-flannel-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:41:11.568294   67253 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:41:11.568357   67253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:41:11.616302   67253 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:41:11.616368   67253 ssh_runner.go:195] Run: which lz4
	I0315 07:41:11.621940   67253 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:41:11.627029   67253 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:41:11.627069   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:41:13.655037   67253 crio.go:444] duration metric: took 2.033132222s to copy over tarball
	I0315 07:41:13.655115   67253 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:41:14.264713   64861 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.37790695s)
	I0315 07:41:14.264796   64861 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.378122217s)
	I0315 07:41:14.264819   64861 start.go:948] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
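Based on the two sed expressions in the command that just completed, the coredns ConfigMap's Corefile gains a log directive and a hosts block ahead of the forward plugin, roughly (other default plugins omitted):

    log
    errors
    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf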
	I0315 07:41:14.265908   64861 node_ready.go:35] waiting up to 15m0s for node "calico-636355" to be "Ready" ...
	I0315 07:41:14.510841   64861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.351286097s)
	I0315 07:41:14.510907   64861 main.go:141] libmachine: Making call to close driver server
	I0315 07:41:14.510924   64861 main.go:141] libmachine: (calico-636355) Calling .Close
	I0315 07:41:14.510988   64861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.371897538s)
	I0315 07:41:14.511043   64861 main.go:141] libmachine: Making call to close driver server
	I0315 07:41:14.511068   64861 main.go:141] libmachine: (calico-636355) Calling .Close
	I0315 07:41:14.511343   64861 main.go:141] libmachine: (calico-636355) DBG | Closing plugin on server side
	I0315 07:41:14.511380   64861 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:41:14.511388   64861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:41:14.511397   64861 main.go:141] libmachine: Making call to close driver server
	I0315 07:41:14.511397   64861 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:41:14.511404   64861 main.go:141] libmachine: (calico-636355) Calling .Close
	I0315 07:41:14.511410   64861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:41:14.511421   64861 main.go:141] libmachine: Making call to close driver server
	I0315 07:41:14.511428   64861 main.go:141] libmachine: (calico-636355) Calling .Close
	I0315 07:41:14.512631   64861 main.go:141] libmachine: (calico-636355) DBG | Closing plugin on server side
	I0315 07:41:14.512630   64861 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:41:14.512660   64861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:41:14.516407   64861 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:41:14.516429   64861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:41:14.516430   64861 main.go:141] libmachine: (calico-636355) DBG | Closing plugin on server side
	I0315 07:41:14.527889   64861 main.go:141] libmachine: Making call to close driver server
	I0315 07:41:14.527916   64861 main.go:141] libmachine: (calico-636355) Calling .Close
	I0315 07:41:14.528243   64861 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:41:14.528263   64861 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:41:14.530451   64861 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0315 07:41:14.531783   64861 addons.go:505] duration metric: took 3.063507787s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0315 07:41:14.774229   64861 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-636355" context rescaled to 1 replicas
	I0315 07:41:16.271034   64861 node_ready.go:53] node "calico-636355" has status "Ready":"False"
	I0315 07:41:14.394881   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:14.395330   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:14.395356   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:14.395293   68012 retry.go:31] will retry after 881.078413ms: waiting for machine to come up
	I0315 07:41:15.277738   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:15.278429   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:15.278454   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:15.278375   68012 retry.go:31] will retry after 1.156797279s: waiting for machine to come up
	I0315 07:41:16.438137   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:16.438903   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:16.438932   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:16.438849   68012 retry.go:31] will retry after 1.103236005s: waiting for machine to come up
	I0315 07:41:17.544347   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:17.544863   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:17.544893   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:17.544827   68012 retry.go:31] will retry after 1.521610544s: waiting for machine to come up
	I0315 07:41:16.755462   67253 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.100317143s)
	I0315 07:41:16.755496   67253 crio.go:451] duration metric: took 3.100429638s to extract the tarball
	I0315 07:41:16.755506   67253 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:41:16.801305   67253 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:41:16.851457   67253 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:41:16.851484   67253 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:41:16.851493   67253 kubeadm.go:928] updating node { 192.168.39.210 8443 v1.28.4 crio true true} ...
	I0315 07:41:16.851647   67253 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-636355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0315 07:41:16.851739   67253 ssh_runner.go:195] Run: crio config
	I0315 07:41:16.916980   67253 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0315 07:41:16.917029   67253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:41:16.917060   67253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-636355 NodeName:custom-flannel-636355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:41:16.917244   67253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-636355"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:41:16.917317   67253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:41:16.933197   67253 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:41:16.933272   67253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:41:16.948014   67253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0315 07:41:16.971103   67253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
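With both the kubelet.service unit and the 10-kubeadm.conf drop-in now copied over, the merged unit that systemd will run after the daemon-reload a few lines below could be verified on the VM with an ordinary check such as:

    sudo systemctl cat kubelet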
	I0315 07:41:17.001363   67253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
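The 2165-byte file just copied is the generated kubeadm configuration printed above; after the later cp from kubeadm.yaml.new to /var/tmp/minikube/kubeadm.yaml it is what the bootstrap consumes, roughly along the lines of (a sketch; minikube adds further flags such as preflight-error ignores):

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml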
	I0315 07:41:17.024533   67253 ssh_runner.go:195] Run: grep 192.168.39.210	control-plane.minikube.internal$ /etc/hosts
	I0315 07:41:17.029711   67253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:41:17.049136   67253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:41:17.180706   67253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:41:17.203308   67253 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355 for IP: 192.168.39.210
	I0315 07:41:17.203336   67253 certs.go:194] generating shared ca certs ...
	I0315 07:41:17.203355   67253 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:17.203509   67253 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:41:17.203590   67253 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:41:17.203603   67253 certs.go:256] generating profile certs ...
	I0315 07:41:17.203676   67253 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.key
	I0315 07:41:17.203694   67253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.crt with IP's: []
	I0315 07:41:17.814275   67253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.crt ...
	I0315 07:41:17.814301   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.crt: {Name:mk667ac2586da95ec01fef52849ca1481f84bb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:17.814503   67253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.key ...
	I0315 07:41:17.814525   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/client.key: {Name:mk1594568d8a9b19113db83f780ba16047e5c701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:17.814666   67253 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key.62386b04
	I0315 07:41:17.814693   67253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt.62386b04 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210]
	I0315 07:41:17.976379   67253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt.62386b04 ...
	I0315 07:41:17.976410   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt.62386b04: {Name:mk2f2b565893782cd0620fba18335d75afeb073e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:17.976607   67253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key.62386b04 ...
	I0315 07:41:17.976627   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key.62386b04: {Name:mk612fbd2c778cc4264174d0d7577ca697262c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:17.976697   67253 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt.62386b04 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt
	I0315 07:41:17.976777   67253 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key.62386b04 -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key
	I0315 07:41:17.976840   67253 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.key
	I0315 07:41:17.976855   67253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.crt with IP's: []
	I0315 07:41:18.202952   67253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.crt ...
	I0315 07:41:18.202985   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.crt: {Name:mk1e6a0531379f09440c5944beee7a85a1cf1c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:18.203181   67253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.key ...
	I0315 07:41:18.203199   67253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.key: {Name:mk7d889b18666fb29e48c78b921fbe48a6777e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:41:18.203385   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:41:18.203427   67253 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:41:18.203441   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:41:18.203464   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:41:18.203487   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:41:18.203505   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:41:18.203548   67253 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:41:18.204101   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:41:18.241405   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:41:18.276037   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:41:18.309961   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:41:18.347905   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:41:18.382855   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:41:18.443876   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:41:18.476081   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/custom-flannel-636355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:41:18.518165   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:41:18.554328   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:41:18.586352   67253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:41:18.623057   67253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:41:18.644866   67253 ssh_runner.go:195] Run: openssl version
	I0315 07:41:18.651899   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:41:18.666100   67253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:41:18.671544   67253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:41:18.671641   67253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:41:18.678562   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:41:18.690420   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:41:18.703194   67253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:41:18.710144   67253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:41:18.710214   67253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:41:18.718601   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:41:18.733054   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:41:18.748882   67253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:41:18.758823   67253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:41:18.758898   67253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:41:18.769759   67253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
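The last few steps follow OpenSSL's hashed-directory convention: each CA certificate in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is exactly what the openssl x509 -hash calls above compute (b5213941, 51391683, 3ec20f2e). For example:

    # prints the subject hash used as the symlink name (b5213941 for minikubeCA)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0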
	I0315 07:41:18.791719   67253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:41:18.797560   67253 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:41:18.797624   67253 kubeadm.go:391] StartCluster: {Name:custom-flannel-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:41:18.797715   67253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:41:18.797772   67253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:41:18.851818   67253 cri.go:89] found id: ""
	I0315 07:41:18.851892   67253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:41:18.863695   67253 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:41:18.876818   67253 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:41:18.890385   67253 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:41:18.890405   67253 kubeadm.go:156] found existing configuration files:
	
	I0315 07:41:18.890457   67253 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:41:18.904140   67253 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:41:18.904264   67253 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:41:18.917033   67253 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:41:18.927372   67253 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:41:18.927437   67253 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:41:18.940140   67253 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:41:18.951066   67253 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:41:18.951138   67253 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:41:18.961769   67253 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:41:18.971720   67253 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:41:18.971799   67253 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:41:18.982199   67253 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:41:19.189667   67253 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:41:18.738872   64861 node_ready.go:53] node "calico-636355" has status "Ready":"False"
	I0315 07:41:20.772207   64861 node_ready.go:53] node "calico-636355" has status "Ready":"False"
	I0315 07:41:22.775964   64861 node_ready.go:49] node "calico-636355" has status "Ready":"True"
	I0315 07:41:22.776000   64861 node_ready.go:38] duration metric: took 8.509915962s for node "calico-636355" to be "Ready" ...
	I0315 07:41:22.776012   64861 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:41:22.789139   64861 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace to be "Ready" ...
	I0315 07:41:19.069257   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:19.069836   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:19.069859   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:19.069790   68012 retry.go:31] will retry after 1.498184769s: waiting for machine to come up
	I0315 07:41:20.569363   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:20.569907   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:20.569932   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:20.569857   68012 retry.go:31] will retry after 2.231762052s: waiting for machine to come up
	I0315 07:41:22.803515   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:22.804228   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:22.804258   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:22.804176   68012 retry.go:31] will retry after 3.127408595s: waiting for machine to come up
	I0315 07:41:24.799226   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:27.295460   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:25.933926   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:25.934520   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:25.934549   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:25.934470   68012 retry.go:31] will retry after 2.923370253s: waiting for machine to come up
	I0315 07:41:28.859681   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:28.860171   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find current IP address of domain enable-default-cni-636355 in network mk-enable-default-cni-636355
	I0315 07:41:28.860202   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | I0315 07:41:28.860136   68012 retry.go:31] will retry after 3.981762119s: waiting for machine to come up
	I0315 07:41:30.314390   67253 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:41:30.314532   67253 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:41:30.314636   67253 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:41:30.314810   67253 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:41:30.314983   67253 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:41:30.315096   67253 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:41:30.316841   67253 out.go:204]   - Generating certificates and keys ...
	I0315 07:41:30.316946   67253 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:41:30.317074   67253 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:41:30.317192   67253 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:41:30.317293   67253 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:41:30.317384   67253 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:41:30.317457   67253 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:41:30.317527   67253 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:41:30.317709   67253 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-636355 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0315 07:41:30.317781   67253 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:41:30.317958   67253 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-636355 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0315 07:41:30.318051   67253 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:41:30.318151   67253 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:41:30.318222   67253 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:41:30.318310   67253 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:41:30.318424   67253 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:41:30.318506   67253 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:41:30.318594   67253 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:41:30.318669   67253 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:41:30.318794   67253 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:41:30.318877   67253 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:41:30.320583   67253 out.go:204]   - Booting up control plane ...
	I0315 07:41:30.320700   67253 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:41:30.320786   67253 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:41:30.320899   67253 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:41:30.321049   67253 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:41:30.321175   67253 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:41:30.321228   67253 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:41:30.321422   67253 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:41:30.321506   67253 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505800 seconds
	I0315 07:41:30.321639   67253 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:41:30.321817   67253 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:41:30.321918   67253 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:41:30.322147   67253 kubeadm.go:309] [mark-control-plane] Marking the node custom-flannel-636355 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:41:30.322226   67253 kubeadm.go:309] [bootstrap-token] Using token: 6dyofh.dmuhgjotaeao8ngz
	I0315 07:41:30.323713   67253 out.go:204]   - Configuring RBAC rules ...
	I0315 07:41:30.323852   67253 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:41:30.323952   67253 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:41:30.324152   67253 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:41:30.324265   67253 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:41:30.324372   67253 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:41:30.324448   67253 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:41:30.324579   67253 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:41:30.324630   67253 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:41:30.324690   67253 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:41:30.324698   67253 kubeadm.go:309] 
	I0315 07:41:30.324784   67253 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:41:30.324795   67253 kubeadm.go:309] 
	I0315 07:41:30.324873   67253 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:41:30.324892   67253 kubeadm.go:309] 
	I0315 07:41:30.324931   67253 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:41:30.325010   67253 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:41:30.325089   67253 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:41:30.325107   67253 kubeadm.go:309] 
	I0315 07:41:30.325176   67253 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:41:30.325185   67253 kubeadm.go:309] 
	I0315 07:41:30.325231   67253 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:41:30.325240   67253 kubeadm.go:309] 
	I0315 07:41:30.325287   67253 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:41:30.325353   67253 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:41:30.325433   67253 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:41:30.325447   67253 kubeadm.go:309] 
	I0315 07:41:30.325538   67253 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:41:30.325639   67253 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:41:30.325650   67253 kubeadm.go:309] 
	I0315 07:41:30.325722   67253 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6dyofh.dmuhgjotaeao8ngz \
	I0315 07:41:30.325813   67253 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:41:30.325854   67253 kubeadm.go:309] 	--control-plane 
	I0315 07:41:30.325863   67253 kubeadm.go:309] 
	I0315 07:41:30.325976   67253 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:41:30.325991   67253 kubeadm.go:309] 
	I0315 07:41:30.326116   67253 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6dyofh.dmuhgjotaeao8ngz \
	I0315 07:41:30.326269   67253 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
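The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. A hedged sketch of how such a value is typically recomputed on the control plane (the /etc/kubernetes/pki/ca.crt path is the kubeadm default, assumed here rather than read from this log):

	# Recompute the discovery-token CA cert hash from the cluster CA.
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'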
	I0315 07:41:30.326283   67253 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0315 07:41:30.327773   67253 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0315 07:41:29.297465   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:31.298136   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:33.302572   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:32.845385   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:32.845907   67845 main.go:141] libmachine: (enable-default-cni-636355) Found IP for machine: 192.168.50.153
	I0315 07:41:32.845936   67845 main.go:141] libmachine: (enable-default-cni-636355) Reserving static IP address...
	I0315 07:41:32.845951   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has current primary IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:32.846305   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-636355", mac: "52:54:00:95:9d:19", ip: "192.168.50.153"} in network mk-enable-default-cni-636355
	I0315 07:41:32.933854   67845 main.go:141] libmachine: (enable-default-cni-636355) Reserved static IP address: 192.168.50.153
	I0315 07:41:32.933886   67845 main.go:141] libmachine: (enable-default-cni-636355) Waiting for SSH to be available...
	I0315 07:41:32.933896   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Getting to WaitForSSH function...
	I0315 07:41:32.936510   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:32.936972   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:32.937000   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:32.937174   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Using SSH client type: external
	I0315 07:41:32.937205   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa (-rw-------)
	I0315 07:41:32.937247   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:41:32.937256   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | About to run SSH command:
	I0315 07:41:32.937268   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | exit 0
	I0315 07:41:33.074860   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | SSH cmd err, output: <nil>: 
	I0315 07:41:33.075230   67845 main.go:141] libmachine: (enable-default-cni-636355) KVM machine creation complete!
	I0315 07:41:33.075601   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetConfigRaw
	I0315 07:41:33.076163   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:33.076376   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:33.076578   67845 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 07:41:33.076592   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetState
	I0315 07:41:33.078160   67845 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 07:41:33.078179   67845 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 07:41:33.078186   67845 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 07:41:33.078196   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.080902   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.081275   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.081296   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.081564   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.081741   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.081924   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.082088   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.082305   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:33.082518   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:33.082529   67845 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 07:41:33.192053   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:41:33.192079   67845 main.go:141] libmachine: Detecting the provisioner...
	I0315 07:41:33.192090   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.195277   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.195704   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.195746   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.195906   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.196193   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.196385   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.196547   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.196732   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:33.196905   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:33.196916   67845 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 07:41:33.313907   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 07:41:33.313991   67845 main.go:141] libmachine: found compatible host: buildroot
	I0315 07:41:33.314001   67845 main.go:141] libmachine: Provisioning with buildroot...
	I0315 07:41:33.314009   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetMachineName
	I0315 07:41:33.314271   67845 buildroot.go:166] provisioning hostname "enable-default-cni-636355"
	I0315 07:41:33.314319   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetMachineName
	I0315 07:41:33.314553   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.317437   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.317782   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.317824   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.317918   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.318122   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.318271   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.318454   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.318682   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:33.318838   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:33.318849   67845 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-636355 && echo "enable-default-cni-636355" | sudo tee /etc/hostname
	I0315 07:41:33.447281   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-636355
	
	I0315 07:41:33.447317   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.450952   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.451435   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.451459   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.451683   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.451846   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.452065   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.452250   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.452407   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:33.452612   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:33.452632   67845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-636355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-636355/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-636355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:41:33.575519   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:41:33.575564   67845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:41:33.575621   67845 buildroot.go:174] setting up certificates
	I0315 07:41:33.575633   67845 provision.go:84] configureAuth start
	I0315 07:41:33.575649   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetMachineName
	I0315 07:41:33.575949   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetIP
	I0315 07:41:33.578838   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.579238   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.579274   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.579382   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.581761   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.582121   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.582141   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.582303   67845 provision.go:143] copyHostCerts
	I0315 07:41:33.582364   67845 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:41:33.582376   67845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:41:33.582441   67845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:41:33.582566   67845 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:41:33.582580   67845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:41:33.582610   67845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:41:33.582681   67845 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:41:33.582693   67845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:41:33.582720   67845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:41:33.582903   67845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-636355 san=[127.0.0.1 192.168.50.153 enable-default-cni-636355 localhost minikube]
	I0315 07:41:33.667532   67845 provision.go:177] copyRemoteCerts
	I0315 07:41:33.667601   67845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:41:33.667624   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.671043   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.671574   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.671615   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.671845   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.672030   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.672214   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.672388   67845 sshutil.go:53] new ssh client: &{IP:192.168.50.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa Username:docker}
	I0315 07:41:33.759548   67845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:41:33.790454   67845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0315 07:41:33.820184   67845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:41:33.852350   67845 provision.go:87] duration metric: took 276.700677ms to configureAuth
	I0315 07:41:33.852385   67845 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:41:33.852636   67845 config.go:182] Loaded profile config "enable-default-cni-636355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:41:33.852746   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:33.856140   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.856488   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:33.856530   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:33.856891   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:33.857096   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.857270   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:33.857446   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:33.857645   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:33.857860   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:33.857881   67845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:41:30.329217   67253 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 07:41:30.329278   67253 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0315 07:41:30.346446   67253 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0315 07:41:30.346486   67253 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0315 07:41:30.421819   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 07:41:31.755557   67253 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.333666667s)
	I0315 07:41:31.755620   67253 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:41:31.755719   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:31.755767   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-636355 minikube.k8s.io/updated_at=2024_03_15T07_41_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=custom-flannel-636355 minikube.k8s.io/primary=true
	I0315 07:41:31.791774   67253 ops.go:34] apiserver oom_adj: -16
	I0315 07:41:31.931232   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:32.431949   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:32.932194   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:33.432259   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:33.931641   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:34.163530   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:41:34.163562   67845 main.go:141] libmachine: Checking connection to Docker...
	I0315 07:41:34.163573   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetURL
	I0315 07:41:34.164914   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | Using libvirt version 6000000
	I0315 07:41:34.167404   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.167718   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.167746   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.167954   67845 main.go:141] libmachine: Docker is up and running!
	I0315 07:41:34.167968   67845 main.go:141] libmachine: Reticulating splines...
	I0315 07:41:34.167974   67845 client.go:171] duration metric: took 24.404903042s to LocalClient.Create
	I0315 07:41:34.167995   67845 start.go:167] duration metric: took 24.404959356s to libmachine.API.Create "enable-default-cni-636355"
	I0315 07:41:34.168005   67845 start.go:293] postStartSetup for "enable-default-cni-636355" (driver="kvm2")
	I0315 07:41:34.168014   67845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:41:34.168029   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:34.168270   67845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:41:34.168296   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:34.170516   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.170850   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.170877   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.171052   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:34.171305   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:34.171478   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:34.171645   67845 sshutil.go:53] new ssh client: &{IP:192.168.50.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa Username:docker}
	I0315 07:41:34.255541   67845 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:41:34.260515   67845 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:41:34.260541   67845 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:41:34.260609   67845 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:41:34.260677   67845 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:41:34.260757   67845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:41:34.270404   67845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:41:34.298065   67845 start.go:296] duration metric: took 130.046895ms for postStartSetup
	I0315 07:41:34.298115   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetConfigRaw
	I0315 07:41:34.298678   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetIP
	I0315 07:41:34.301638   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.302038   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.302072   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.302296   67845 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/enable-default-cni-636355/config.json ...
	I0315 07:41:34.302525   67845 start.go:128] duration metric: took 24.560233573s to createHost
	I0315 07:41:34.302565   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:34.305069   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.305417   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.305448   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.305630   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:34.305865   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:34.306029   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:34.306192   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:34.306366   67845 main.go:141] libmachine: Using SSH client type: native
	I0315 07:41:34.306569   67845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.153 22 <nil> <nil>}
	I0315 07:41:34.306583   67845 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:41:34.417463   67845 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710488494.393692831
	
	I0315 07:41:34.417485   67845 fix.go:216] guest clock: 1710488494.393692831
	I0315 07:41:34.417494   67845 fix.go:229] Guest: 2024-03-15 07:41:34.393692831 +0000 UTC Remote: 2024-03-15 07:41:34.302554838 +0000 UTC m=+50.315444217 (delta=91.137993ms)
	I0315 07:41:34.417517   67845 fix.go:200] guest clock delta is within tolerance: 91.137993ms
	I0315 07:41:34.417523   67845 start.go:83] releasing machines lock for "enable-default-cni-636355", held for 24.675406205s
	I0315 07:41:34.417544   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:34.417878   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetIP
	I0315 07:41:34.420650   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.421033   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.421053   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.421241   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:34.421742   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:34.421902   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .DriverName
	I0315 07:41:34.421980   67845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:41:34.422026   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:34.422086   67845 ssh_runner.go:195] Run: cat /version.json
	I0315 07:41:34.422105   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHHostname
	I0315 07:41:34.424845   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.424987   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.425232   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.425266   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.425445   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:34.425472   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:34.425476   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:34.425668   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHPort
	I0315 07:41:34.425674   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:34.425828   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHKeyPath
	I0315 07:41:34.425845   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:34.425969   67845 sshutil.go:53] new ssh client: &{IP:192.168.50.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa Username:docker}
	I0315 07:41:34.426035   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetSSHUsername
	I0315 07:41:34.426176   67845 sshutil.go:53] new ssh client: &{IP:192.168.50.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/enable-default-cni-636355/id_rsa Username:docker}
	I0315 07:41:34.506119   67845 ssh_runner.go:195] Run: systemctl --version
	I0315 07:41:34.546030   67845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:41:34.721792   67845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:41:34.730352   67845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:41:34.730438   67845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:41:34.757509   67845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:41:34.757549   67845 start.go:494] detecting cgroup driver to use...
	I0315 07:41:34.757619   67845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:41:34.781370   67845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:41:34.797418   67845 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:41:34.797480   67845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:41:34.813589   67845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:41:34.828972   67845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:41:34.971413   67845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:41:35.154040   67845 docker.go:233] disabling docker service ...
	I0315 07:41:35.154169   67845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:41:35.169838   67845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:41:35.186292   67845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:41:35.345238   67845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:41:35.495815   67845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:41:35.513777   67845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:41:35.536775   67845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:41:35.536837   67845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:35.549628   67845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:41:35.549701   67845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:35.563022   67845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:35.576677   67845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:41:35.589928   67845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:41:35.602963   67845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:41:35.613690   67845 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:41:35.613747   67845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:41:35.629066   67845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:41:35.642421   67845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:41:35.784604   67845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:41:35.946294   67845 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:41:35.946358   67845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:41:35.951589   67845 start.go:562] Will wait 60s for crictl version
	I0315 07:41:35.951653   67845 ssh_runner.go:195] Run: which crictl
	I0315 07:41:35.956248   67845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:41:35.995060   67845 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:41:35.995159   67845 ssh_runner.go:195] Run: crio --version
	I0315 07:41:36.032584   67845 ssh_runner.go:195] Run: crio --version
	I0315 07:41:36.066539   67845 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
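The 67845 lines above (07:41:34–07:41:36) show the guest runtime being prepared over SSH: containerd, cri-docker and docker are stopped and masked, CRI-O is pointed at the registry.k8s.io/pause:3.9 pause image and switched to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and ip_forward are enabled, and crio is restarted before the crictl version check. The following is only a minimal local sketch of those same configuration steps, assuming root on the guest and the config path from the log; it is not minikube's ssh_runner implementation.

package main

import (
	"log"
	"os/exec"
)

// Sketch only: apply the CRI-O settings seen in the log above and restart the
// service. Assumes it runs as root on the guest; error handling is minimal.
func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"modprobe", "br_netfilter"},
		{"sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("step %v failed: %v\n%s", s, err, out)
		}
	}
}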
	I0315 07:41:35.303914   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:37.798007   64861 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-8kjdw" in "kube-system" namespace has status "Ready":"False"
	I0315 07:41:36.067860   67845 main.go:141] libmachine: (enable-default-cni-636355) Calling .GetIP
	I0315 07:41:36.070535   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:36.070967   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:9d:19", ip: ""} in network mk-enable-default-cni-636355: {Iface:virbr2 ExpiryTime:2024-03-15 08:41:27 +0000 UTC Type:0 Mac:52:54:00:95:9d:19 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:enable-default-cni-636355 Clientid:01:52:54:00:95:9d:19}
	I0315 07:41:36.071014   67845 main.go:141] libmachine: (enable-default-cni-636355) DBG | domain enable-default-cni-636355 has defined IP address 192.168.50.153 and MAC address 52:54:00:95:9d:19 in network mk-enable-default-cni-636355
	I0315 07:41:36.071233   67845 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:41:36.076270   67845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:41:36.090624   67845 kubeadm.go:877] updating cluster {Name:enable-default-cni-636355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:enable-default-cni-636355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:41:36.090762   67845 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:41:36.090822   67845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:41:36.130420   67845 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:41:36.130503   67845 ssh_runner.go:195] Run: which lz4
	I0315 07:41:36.134915   67845 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:41:36.139779   67845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:41:36.139819   67845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:41:37.922394   67845 crio.go:444] duration metric: took 1.78752755s to copy over tarball
	I0315 07:41:37.922472   67845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
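Around 07:41:36 the provisioner finds no preloaded images in CRI-O's store, checks for /preloaded.tar.lz4 on the guest, copies the ~458 MB preload tarball over SSH, and unpacks it into /var with an lz4-compressed tar. A hedged sketch of that check-then-extract step follows, assuming the tarball has already been copied to the guest (the SSH copy itself is omitted); the invocation mirrors the tar flags shown in the log, nothing more.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Sketch: if the preload tarball is present, unpack it with the same tar
// invocation the log shows; otherwise note that it would have to be copied
// over first. Assumes root and an lz4 binary on the guest.
func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing; minikube would scp the preload archive here first")
		return
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}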
	I0315 07:41:34.432130   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:34.931992   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:35.431339   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:35.932005   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:36.431378   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:36.931556   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:37.431551   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:37.931809   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:38.431289   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:41:38.932180   67253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
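The 67253 process, meanwhile, re-runs kubectl get sa default roughly every 500 ms, waiting for the default service account to appear in the restored cluster. Below is a minimal sketch of that kind of polling loop; the 2-minute timeout is a hypothetical value, since the deadline minikube actually uses is not visible in this excerpt.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch: poll for the default service account the way the repeated log lines
// above suggest, retrying every 500 ms until success or the deadline passes.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Equivalent of: kubectl get sa default --kubeconfig=<path>
		if err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}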
	
	
	==> CRI-O <==
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.016232902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488504016190148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd08d5a0-95f2-4c16-8a7a-17f635e8c5e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.016964446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa038d64-1c05-4e4d-a1a6-ac6d2fc2305b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.017020553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa038d64-1c05-4e4d-a1a6-ac6d2fc2305b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.017201894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa038d64-1c05-4e4d-a1a6-ac6d2fc2305b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.060164901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07417cc3-872e-4700-80e5-66be9f39ad51 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.060235906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07417cc3-872e-4700-80e5-66be9f39ad51 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.061480862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1c82641-8c13-4867-9cdb-ea824c3780f8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.061939821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488504061905576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1c82641-8c13-4867-9cdb-ea824c3780f8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.062665080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ea3bdb-f9e3-4d9b-b3c5-f7cc30bffaf3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.062717091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ea3bdb-f9e3-4d9b-b3c5-f7cc30bffaf3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.062979629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ea3bdb-f9e3-4d9b-b3c5-f7cc30bffaf3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.112166428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8abe7ff8-cc2e-4362-8c38-df8e2d333b93 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.112238485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8abe7ff8-cc2e-4362-8c38-df8e2d333b93 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.113565329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d98e4ef2-266f-4684-ac59-083a0d2f8a8a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.114202213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488504114175187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d98e4ef2-266f-4684-ac59-083a0d2f8a8a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.114801452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ee59031-37de-41d8-89e8-d2a812deba9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.114921676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ee59031-37de-41d8-89e8-d2a812deba9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.115364494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ee59031-37de-41d8-89e8-d2a812deba9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.165402248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dbd300a-4077-463c-8190-2fc7eec0a5be name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.165478142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dbd300a-4077-463c-8190-2fc7eec0a5be name=/runtime.v1.RuntimeService/Version
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.167354255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4a2fbc7-f433-4443-a409-a845a5bae15f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.167737352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488504167665218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4a2fbc7-f433-4443-a409-a845a5bae15f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.168403092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd2a8bfa-de77-4856-979e-1ff14fae3f93 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.168452220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd2a8bfa-de77-4856-979e-1ff14fae3f93 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:41:44 no-preload-184055 crio[692]: time="2024-03-15 07:41:44.168636425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487182828752629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e925fa3d2116863313aee74156af7c6ac99a9556df23af68affd852b95f623,PodSandboxId:d833e77fa86e19b5c8ff795ee907b77190a74ba58865a70480e2e5a6392d5868,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710487160884246346,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 3a25dd22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6,PodSandboxId:406657ad057df0d63587183bb906e205e4fa956209b5dae956122710806606e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710487157489986041,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-tc5zh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cc47f60-adca-4c07-9366-ac2f84274042,},Annotations:map[string]string{io.kubernetes.container.hash: ec6a3416,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9,PodSandboxId:7341ebe223c09101dc217606fbe43a1b94211ad6a8a1c12ef6e9253a0ce01124,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710487151288675845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
d1c5fc1-ba80-48d6-a195-029b3a11abd5,},Annotations:map[string]string{io.kubernetes.container.hash: 84db6c2e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f,PodSandboxId:dd5f1ed1a17b1bfad4f27aeaeaa84bee416868328458ff6fc77af01d9668996e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710487151280321728,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-977jm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e526c5-d0ee-46b7-a357-1e6fe36dcd
9d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f03cb60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535,PodSandboxId:45511ccba735af6aef78ff4599fe37cfdfed968dbdb43dbf527fc8df6d624092,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710487145454607766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e224db2d60a1a3229488541daffbee0c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 82222e44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c,PodSandboxId:586f0e7088ddb06f2861a8e046705fbb8653a39903b809f0f2b439edb9cff2b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710487145367263800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c5a44742012f6daa13ce053ba36
d40,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731,PodSandboxId:93f87d655960b50f610c7d795e73187f388123680b2bcbff6069997f08e09264,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710487145329907899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1697e10a28a06759311796140441c2d,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadec9b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c,PodSandboxId:895dc958fcc2877cd73f11c17c36ac72128b8cd4ff559a85c5c22054eed48db7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710487145253463270,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-184055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed27b31327b3f79a59c48ee59d37d8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd2a8bfa-de77-4856-979e-1ff14fae3f93 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1c3aa6c23ece       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       3                   7341ebe223c09       storage-provisioner
	80e925fa3d211       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   d833e77fa86e1       busybox
	3e3a341887d9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   406657ad057df       coredns-76f75df574-tc5zh
	4ba10dcc803b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   7341ebe223c09       storage-provisioner
	ca87ab91e305f       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      22 minutes ago      Running             kube-proxy                1                   dd5f1ed1a17b1       kube-proxy-977jm
	2820074ba55a6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      22 minutes ago      Running             kube-apiserver            1                   45511ccba735a       kube-apiserver-no-preload-184055
	a234f9f8e0d8d       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      22 minutes ago      Running             kube-controller-manager   1                   586f0e7088ddb       kube-controller-manager-no-preload-184055
	1c840a3842d52       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      22 minutes ago      Running             etcd                      1                   93f87d655960b       etcd-no-preload-184055
	461e402c50f1c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      22 minutes ago      Running             kube-scheduler            1                   895dc958fcc28       kube-scheduler-no-preload-184055
	
	
	==> coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36696 - 37930 "HINFO IN 5426171196768362982.3097198221435832737. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010928711s
	
	
	==> describe nodes <==
	Name:               no-preload-184055
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-184055
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=no-preload-184055
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_12_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:12:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-184055
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:41:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:40:04 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:40:04 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:40:04 +0000   Fri, 15 Mar 2024 07:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:40:04 +0000   Fri, 15 Mar 2024 07:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.106
	  Hostname:    no-preload-184055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b57f50a2f704415298aec56860814624
	  System UUID:                b57f50a2-f704-4152-98ae-c56860814624
	  Boot ID:                    875c1d52-cf3e-4250-b823-726e2af71c9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 coredns-76f75df574-tc5zh                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-no-preload-184055                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 kube-apiserver-no-preload-184055             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-controller-manager-no-preload-184055    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-proxy-977jm                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-no-preload-184055             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 metrics-server-57f55c9bc5-gwnxc              100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         27m
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-184055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-184055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-184055 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-184055 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-184055 event: Registered Node no-preload-184055 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-184055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-184055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-184055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-184055 event: Registered Node no-preload-184055 in Controller
	
	
	==> dmesg <==
	[Mar15 07:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062428] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.928536] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.644272] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.296383] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.059247] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069343] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.220363] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.128538] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.254777] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[Mar15 07:19] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.066798] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.237176] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +2.967027] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.022702] kauditd_printk_skb: 13 callbacks suppressed
	[  +2.092732] systemd-fstab-generator[1947]: Ignoring "noauto" option for root device
	[  +3.000365] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.326276] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] <==
	{"level":"info","ts":"2024-03-15T07:39:08.183726Z","caller":"traceutil/trace.go:171","msg":"trace[1229730758] compact","detail":"{revision:1326; response_revision:1570; }","duration":"315.385648ms","start":"2024-03-15T07:39:07.868326Z","end":"2024-03-15T07:39:08.183712Z","steps":["trace[1229730758] 'process raft request'  (duration: 69.392192ms)","trace[1229730758] 'check and update compact revision'  (duration: 243.631129ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:39:08.183785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:39:07.868292Z","time spent":"315.48916ms","remote":"127.0.0.1:38356","response type":"/etcdserverpb.KV/Compact","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-03-15T07:39:08.184053Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1326,"took":"1.86308ms","hash":2125037197}
	{"level":"info","ts":"2024-03-15T07:39:08.184128Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2125037197,"revision":1326,"compact-revision":1084}
	{"level":"info","ts":"2024-03-15T07:39:35.224812Z","caller":"traceutil/trace.go:171","msg":"trace[1573492689] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"113.803423ms","start":"2024-03-15T07:39:35.11099Z","end":"2024-03-15T07:39:35.224794Z","steps":["trace[1573492689] 'process raft request'  (duration: 113.552288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:39:35.489599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.510086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T07:39:35.489685Z","caller":"traceutil/trace.go:171","msg":"trace[1788291409] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1592; }","duration":"234.681191ms","start":"2024-03-15T07:39:35.254989Z","end":"2024-03-15T07:39:35.48967Z","steps":["trace[1788291409] 'range keys from in-memory index tree'  (duration: 234.447228ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:39:35.767336Z","caller":"traceutil/trace.go:171","msg":"trace[1370543720] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"211.881381ms","start":"2024-03-15T07:39:35.555435Z","end":"2024-03-15T07:39:35.767317Z","steps":["trace[1370543720] 'process raft request'  (duration: 211.774243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:39:59.681709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.367994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-15T07:39:59.68198Z","caller":"traceutil/trace.go:171","msg":"trace[1127263636] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1612; }","duration":"105.660187ms","start":"2024-03-15T07:39:59.576289Z","end":"2024-03-15T07:39:59.681949Z","steps":["trace[1127263636] 'count revisions from in-memory index tree'  (duration: 105.170873ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:40:47.85573Z","caller":"traceutil/trace.go:171","msg":"trace[771266816] linearizableReadLoop","detail":"{readStateIndex:1959; appliedIndex:1958; }","duration":"132.611016ms","start":"2024-03-15T07:40:47.723086Z","end":"2024-03-15T07:40:47.855697Z","steps":["trace[771266816] 'read index received'  (duration: 132.441806ms)","trace[771266816] 'applied index is now lower than readState.Index'  (duration: 167.852µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:40:47.85608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.907821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-03-15T07:40:47.856162Z","caller":"traceutil/trace.go:171","msg":"trace[1698450105] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1651; }","duration":"133.087038ms","start":"2024-03-15T07:40:47.72306Z","end":"2024-03-15T07:40:47.856147Z","steps":["trace[1698450105] 'agreement among raft nodes before linearized reading'  (duration: 132.836569ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:40:48.269599Z","caller":"traceutil/trace.go:171","msg":"trace[201831178] transaction","detail":"{read_only:false; response_revision:1652; number_of_response:1; }","duration":"408.14422ms","start":"2024-03-15T07:40:47.861441Z","end":"2024-03-15T07:40:48.269585Z","steps":["trace[201831178] 'process raft request'  (duration: 407.752412ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:40:48.270114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:40:47.861426Z","time spent":"408.550296ms","remote":"127.0.0.1:38486","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1650 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-15T07:41:16.896734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.223697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T07:41:16.897072Z","caller":"traceutil/trace.go:171","msg":"trace[167903232] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1675; }","duration":"123.736073ms","start":"2024-03-15T07:41:16.773306Z","end":"2024-03-15T07:41:16.897042Z","steps":["trace[167903232] 'range keys from in-memory index tree'  (duration: 122.1358ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:41:18.647615Z","caller":"traceutil/trace.go:171","msg":"trace[1226120382] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"127.514296ms","start":"2024-03-15T07:41:18.520085Z","end":"2024-03-15T07:41:18.6476Z","steps":["trace[1226120382] 'process raft request'  (duration: 127.363351ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:41:41.121551Z","caller":"traceutil/trace.go:171","msg":"trace[1205849111] linearizableReadLoop","detail":"{readStateIndex:2013; appliedIndex:2012; }","duration":"196.807991ms","start":"2024-03-15T07:41:40.924715Z","end":"2024-03-15T07:41:41.121523Z","steps":["trace[1205849111] 'read index received'  (duration: 196.591324ms)","trace[1205849111] 'applied index is now lower than readState.Index'  (duration: 215.878µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T07:41:41.121739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.010275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-15T07:41:41.121776Z","caller":"traceutil/trace.go:171","msg":"trace[155247686] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1695; }","duration":"197.082359ms","start":"2024-03-15T07:41:40.924682Z","end":"2024-03-15T07:41:41.121764Z","steps":["trace[155247686] 'agreement among raft nodes before linearized reading'  (duration: 196.926006ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T07:41:41.122067Z","caller":"traceutil/trace.go:171","msg":"trace[145616215] transaction","detail":"{read_only:false; response_revision:1695; number_of_response:1; }","duration":"318.449065ms","start":"2024-03-15T07:41:40.803599Z","end":"2024-03-15T07:41:41.122049Z","steps":["trace[145616215] 'process raft request'  (duration: 317.760764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T07:41:41.122307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T07:41:40.803584Z","time spent":"318.614683ms","remote":"127.0.0.1:38486","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1693 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-15T07:41:43.382045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.279894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T07:41:43.382347Z","caller":"traceutil/trace.go:171","msg":"trace[452789818] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1697; }","duration":"127.580082ms","start":"2024-03-15T07:41:43.254747Z","end":"2024-03-15T07:41:43.382327Z","steps":["trace[452789818] 'range keys from in-memory index tree'  (duration: 127.092217ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:41:44 up 23 min,  0 users,  load average: 0.51, 0.33, 0.18
	Linux no-preload-184055 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] <==
	I0315 07:35:10.331692       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:37:10.331800       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:37:10.331940       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:37:10.331953       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:37:10.331812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:37:10.332052       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:37:10.333305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:39:09.335638       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:39:09.336242       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0315 07:39:10.337366       1 handler_proxy.go:93] no RequestInfo found in the context
	W0315 07:39:10.337499       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:39:10.337678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:39:10.337705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0315 07:39:10.337784       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:39:10.339012       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:40:10.338580       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:40:10.338673       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:40:10.338686       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:40:10.339991       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:40:10.340168       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:40:10.340208       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] <==
	I0315 07:35:54.458125       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:23.926944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:24.466153       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:53.932062       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:54.476558       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:23.941521       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:24.485025       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:53.949733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:54.493770       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:38:23.956791       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:38:24.504025       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:38:53.962070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:38:54.513524       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:39:23.968720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:39:24.522903       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:39:53.975028       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:39:54.535757       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:40:23.981860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:40:24.546023       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:40:42.621761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="326.102µs"
	E0315 07:40:53.988271       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:40:54.555695       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:40:54.604481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="109.994µs"
	E0315 07:41:23.999156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:41:24.565511       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] <==
	I0315 07:19:11.941362       1 server_others.go:72] "Using iptables proxy"
	I0315 07:19:12.074013       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.106"]
	I0315 07:19:12.181166       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0315 07:19:12.181243       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:19:12.181264       1 server_others.go:168] "Using iptables Proxier"
	I0315 07:19:12.188037       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:19:12.190344       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0315 07:19:12.190575       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:19:12.191535       1 config.go:188] "Starting service config controller"
	I0315 07:19:12.191609       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:19:12.191645       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:19:12.191662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:19:12.194026       1 config.go:315] "Starting node config controller"
	I0315 07:19:12.194143       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:19:12.292470       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:19:12.292734       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:19:12.294274       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] <==
	I0315 07:19:06.287433       1 serving.go:380] Generated self-signed cert in-memory
	W0315 07:19:09.256584       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:19:09.256754       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:19:09.256873       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:19:09.256906       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:19:09.364554       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0315 07:19:09.364621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:19:09.368916       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 07:19:09.369182       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 07:19:09.369229       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:19:09.369254       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 07:19:09.469398       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:39:13 no-preload-184055 kubelet[1322]: E0315 07:39:13.585921    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:39:28 no-preload-184055 kubelet[1322]: E0315 07:39:28.588345    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:39:39 no-preload-184055 kubelet[1322]: E0315 07:39:39.587098    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:39:50 no-preload-184055 kubelet[1322]: E0315 07:39:50.585658    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:40:02 no-preload-184055 kubelet[1322]: E0315 07:40:02.587073    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:40:04 no-preload-184055 kubelet[1322]: E0315 07:40:04.605711    1322 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:40:04 no-preload-184055 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:40:04 no-preload-184055 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:40:04 no-preload-184055 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:40:04 no-preload-184055 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:40:16 no-preload-184055 kubelet[1322]: E0315 07:40:16.586015    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:40:30 no-preload-184055 kubelet[1322]: E0315 07:40:30.600263    1322 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 15 07:40:30 no-preload-184055 kubelet[1322]: E0315 07:40:30.600778    1322 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 15 07:40:30 no-preload-184055 kubelet[1322]: E0315 07:40:30.601456    1322 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9n8r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-gwnxc_kube-system(abff20ab-2240-4106-b3fc-ffce142e8069): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 15 07:40:30 no-preload-184055 kubelet[1322]: E0315 07:40:30.601739    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:40:42 no-preload-184055 kubelet[1322]: E0315 07:40:42.588609    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:40:54 no-preload-184055 kubelet[1322]: E0315 07:40:54.584972    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:41:04 no-preload-184055 kubelet[1322]: E0315 07:41:04.601403    1322 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:41:04 no-preload-184055 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:41:04 no-preload-184055 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:41:04 no-preload-184055 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:41:04 no-preload-184055 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:41:07 no-preload-184055 kubelet[1322]: E0315 07:41:07.585127    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:41:22 no-preload-184055 kubelet[1322]: E0315 07:41:22.586571    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	Mar 15 07:41:37 no-preload-184055 kubelet[1322]: E0315 07:41:37.586455    1322 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gwnxc" podUID="abff20ab-2240-4106-b3fc-ffce142e8069"
	
	
	==> storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] <==
	I0315 07:19:11.809467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0315 07:19:41.813463       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] <==
	I0315 07:19:42.928615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:19:42.945127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:19:42.945233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:20:00.346321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:20:00.346487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3!
	I0315 07:20:00.347551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb92104d-7794-46fb-a76c-f5edb625cf7c", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3 became leader
	I0315 07:20:00.447722       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-184055_5ac4574e-e6ff-4f85-ab39-d7ff33f229d3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184055 -n no-preload-184055
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-184055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-gwnxc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc: exit status 1 (92.627393ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-gwnxc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-184055 describe pod metrics-server-57f55c9bc5-gwnxc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (544.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (303.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709708 -n embed-certs-709708
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-15 07:38:31.13420768 +0000 UTC m=+6156.137916330
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-709708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-709708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.373µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-709708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-709708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-709708 logs -n 25: (1.484539419s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:37 UTC | 15 Mar 24 07:37 UTC |
	| start   | -p newest-cni-027190 --memory=2200 --alsologtostderr   | newest-cni-027190            | jenkins | v1.32.0 | 15 Mar 24 07:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
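	(For reference, the final "start" entry in the table above corresponds to a single CLI invocation along these lines; every flag is copied from the table rows, and the harness runs the binary named by MINIKUBE_BIN in the environment listed below rather than a system-wide minikube:)
	
	  minikube start -p newest-cni-027190 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.29.0-rc.2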
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:37:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:37:50.185466   62652 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:37:50.185714   62652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:37:50.185724   62652 out.go:304] Setting ErrFile to fd 2...
	I0315 07:37:50.185731   62652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:37:50.185938   62652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:37:50.186528   62652 out.go:298] Setting JSON to false
	I0315 07:37:50.187453   62652 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8367,"bootTime":1710479904,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:37:50.187524   62652 start.go:139] virtualization: kvm guest
	I0315 07:37:50.189966   62652 out.go:177] * [newest-cni-027190] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:37:50.191646   62652 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:37:50.192990   62652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:37:50.191694   62652 notify.go:220] Checking for updates...
	I0315 07:37:50.195758   62652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:37:50.197003   62652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:37:50.198261   62652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:37:50.199548   62652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:37:50.201765   62652 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:37:50.201854   62652 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:37:50.201941   62652 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:37:50.202029   62652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:37:50.239945   62652 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:37:50.241348   62652 start.go:297] selected driver: kvm2
	I0315 07:37:50.241371   62652 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:37:50.241388   62652 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:37:50.242102   62652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:37:50.242202   62652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:37:50.258632   62652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:37:50.258694   62652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0315 07:37:50.258725   62652 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0315 07:37:50.258996   62652 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0315 07:37:50.259083   62652 cni.go:84] Creating CNI manager for ""
	I0315 07:37:50.259097   62652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:37:50.259110   62652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 07:37:50.259195   62652 start.go:340] cluster config:
	{Name:newest-cni-027190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-027190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:37:50.259322   62652 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:37:50.261404   62652 out.go:177] * Starting "newest-cni-027190" primary control-plane node in "newest-cni-027190" cluster
	I0315 07:37:50.262604   62652 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:37:50.262636   62652 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0315 07:37:50.262645   62652 cache.go:56] Caching tarball of preloaded images
	I0315 07:37:50.262729   62652 preload.go:173] Found /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 07:37:50.262743   62652 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0315 07:37:50.262843   62652 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/config.json ...
	I0315 07:37:50.262862   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/config.json: {Name:mk707991156914aa389c87154b93cd7cc4020973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:37:50.263009   62652 start.go:360] acquireMachinesLock for newest-cni-027190: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:37:50.263061   62652 start.go:364] duration metric: took 34.217µs to acquireMachinesLock for "newest-cni-027190"
	I0315 07:37:50.263084   62652 start.go:93] Provisioning new machine with config: &{Name:newest-cni-027190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-027190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:37:50.263155   62652 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 07:37:50.264707   62652 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 07:37:50.264877   62652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:37:50.264928   62652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:37:50.279971   62652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I0315 07:37:50.280441   62652 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:37:50.281006   62652 main.go:141] libmachine: Using API Version  1
	I0315 07:37:50.281026   62652 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:37:50.281455   62652 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:37:50.281674   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetMachineName
	I0315 07:37:50.281857   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:37:50.282012   62652 start.go:159] libmachine.API.Create for "newest-cni-027190" (driver="kvm2")
	I0315 07:37:50.282043   62652 client.go:168] LocalClient.Create starting
	I0315 07:37:50.282082   62652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem
	I0315 07:37:50.282130   62652 main.go:141] libmachine: Decoding PEM data...
	I0315 07:37:50.282145   62652 main.go:141] libmachine: Parsing certificate...
	I0315 07:37:50.282189   62652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem
	I0315 07:37:50.282212   62652 main.go:141] libmachine: Decoding PEM data...
	I0315 07:37:50.282222   62652 main.go:141] libmachine: Parsing certificate...
	I0315 07:37:50.282242   62652 main.go:141] libmachine: Running pre-create checks...
	I0315 07:37:50.282250   62652 main.go:141] libmachine: (newest-cni-027190) Calling .PreCreateCheck
	I0315 07:37:50.282638   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetConfigRaw
	I0315 07:37:50.283030   62652 main.go:141] libmachine: Creating machine...
	I0315 07:37:50.283044   62652 main.go:141] libmachine: (newest-cni-027190) Calling .Create
	I0315 07:37:50.283189   62652 main.go:141] libmachine: (newest-cni-027190) Creating KVM machine...
	I0315 07:37:50.284736   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found existing default KVM network
	I0315 07:37:50.286151   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.285991   62675 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:9d:f3} reservation:<nil>}
	I0315 07:37:50.286932   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.286844   62675 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:af:c3} reservation:<nil>}
	I0315 07:37:50.288276   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.288203   62675 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289030}
	I0315 07:37:50.288334   62652 main.go:141] libmachine: (newest-cni-027190) DBG | created network xml: 
	I0315 07:37:50.288354   62652 main.go:141] libmachine: (newest-cni-027190) DBG | <network>
	I0315 07:37:50.288365   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   <name>mk-newest-cni-027190</name>
	I0315 07:37:50.288374   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   <dns enable='no'/>
	I0315 07:37:50.288380   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   
	I0315 07:37:50.288392   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0315 07:37:50.288406   62652 main.go:141] libmachine: (newest-cni-027190) DBG |     <dhcp>
	I0315 07:37:50.288432   62652 main.go:141] libmachine: (newest-cni-027190) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0315 07:37:50.288447   62652 main.go:141] libmachine: (newest-cni-027190) DBG |     </dhcp>
	I0315 07:37:50.288454   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   </ip>
	I0315 07:37:50.288477   62652 main.go:141] libmachine: (newest-cni-027190) DBG |   
	I0315 07:37:50.288492   62652 main.go:141] libmachine: (newest-cni-027190) DBG | </network>
	I0315 07:37:50.288504   62652 main.go:141] libmachine: (newest-cni-027190) DBG | 
	I0315 07:37:50.294360   62652 main.go:141] libmachine: (newest-cni-027190) DBG | trying to create private KVM network mk-newest-cni-027190 192.168.61.0/24...
	I0315 07:37:50.371047   62652 main.go:141] libmachine: (newest-cni-027190) DBG | private KVM network mk-newest-cni-027190 192.168.61.0/24 created
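	(For comparison, a private network like the XML logged above can also be created by hand with stock libvirt tooling; a minimal sketch, assuming the XML is saved under an illustrative file name:)
	
	  # define and start a private libvirt network from XML like the block logged above
	  virsh net-define mk-newest-cni-027190.xml
	  virsh net-start mk-newest-cni-027190
	  virsh net-list --all   # should now show mk-newest-cni-027190 as active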
	I0315 07:37:50.371081   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.370992   62675 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:37:50.371094   62652 main.go:141] libmachine: (newest-cni-027190) Setting up store path in /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190 ...
	I0315 07:37:50.371126   62652 main.go:141] libmachine: (newest-cni-027190) Building disk image from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 07:37:50.371228   62652 main.go:141] libmachine: (newest-cni-027190) Downloading /home/jenkins/minikube-integration/18213-8825/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso...
	I0315 07:37:50.595127   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.594965   62675 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa...
	I0315 07:37:50.694028   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.693911   62675 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/newest-cni-027190.rawdisk...
	I0315 07:37:50.694062   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Writing magic tar header
	I0315 07:37:50.694079   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Writing SSH key tar header
	I0315 07:37:50.694091   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:50.694025   62675 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190 ...
	I0315 07:37:50.694129   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190
	I0315 07:37:50.694175   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190 (perms=drwx------)
	I0315 07:37:50.694196   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube/machines
	I0315 07:37:50.694207   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube/machines (perms=drwxr-xr-x)
	I0315 07:37:50.694232   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825/.minikube (perms=drwxr-xr-x)
	I0315 07:37:50.694241   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins/minikube-integration/18213-8825 (perms=drwxrwxr-x)
	I0315 07:37:50.694248   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:37:50.694254   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 07:37:50.694265   62652 main.go:141] libmachine: (newest-cni-027190) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 07:37:50.694282   62652 main.go:141] libmachine: (newest-cni-027190) Creating domain...
	I0315 07:37:50.694305   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18213-8825
	I0315 07:37:50.694316   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 07:37:50.694324   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home/jenkins
	I0315 07:37:50.694329   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Checking permissions on dir: /home
	I0315 07:37:50.694337   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Skipping /home - not owner
	I0315 07:37:50.695573   62652 main.go:141] libmachine: (newest-cni-027190) define libvirt domain using xml: 
	I0315 07:37:50.695590   62652 main.go:141] libmachine: (newest-cni-027190) <domain type='kvm'>
	I0315 07:37:50.695598   62652 main.go:141] libmachine: (newest-cni-027190)   <name>newest-cni-027190</name>
	I0315 07:37:50.695603   62652 main.go:141] libmachine: (newest-cni-027190)   <memory unit='MiB'>2200</memory>
	I0315 07:37:50.695608   62652 main.go:141] libmachine: (newest-cni-027190)   <vcpu>2</vcpu>
	I0315 07:37:50.695624   62652 main.go:141] libmachine: (newest-cni-027190)   <features>
	I0315 07:37:50.695635   62652 main.go:141] libmachine: (newest-cni-027190)     <acpi/>
	I0315 07:37:50.695651   62652 main.go:141] libmachine: (newest-cni-027190)     <apic/>
	I0315 07:37:50.695661   62652 main.go:141] libmachine: (newest-cni-027190)     <pae/>
	I0315 07:37:50.695666   62652 main.go:141] libmachine: (newest-cni-027190)     
	I0315 07:37:50.695673   62652 main.go:141] libmachine: (newest-cni-027190)   </features>
	I0315 07:37:50.695679   62652 main.go:141] libmachine: (newest-cni-027190)   <cpu mode='host-passthrough'>
	I0315 07:37:50.695686   62652 main.go:141] libmachine: (newest-cni-027190)   
	I0315 07:37:50.695693   62652 main.go:141] libmachine: (newest-cni-027190)   </cpu>
	I0315 07:37:50.695698   62652 main.go:141] libmachine: (newest-cni-027190)   <os>
	I0315 07:37:50.695705   62652 main.go:141] libmachine: (newest-cni-027190)     <type>hvm</type>
	I0315 07:37:50.695712   62652 main.go:141] libmachine: (newest-cni-027190)     <boot dev='cdrom'/>
	I0315 07:37:50.695722   62652 main.go:141] libmachine: (newest-cni-027190)     <boot dev='hd'/>
	I0315 07:37:50.695758   62652 main.go:141] libmachine: (newest-cni-027190)     <bootmenu enable='no'/>
	I0315 07:37:50.695788   62652 main.go:141] libmachine: (newest-cni-027190)   </os>
	I0315 07:37:50.695804   62652 main.go:141] libmachine: (newest-cni-027190)   <devices>
	I0315 07:37:50.695816   62652 main.go:141] libmachine: (newest-cni-027190)     <disk type='file' device='cdrom'>
	I0315 07:37:50.695834   62652 main.go:141] libmachine: (newest-cni-027190)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/boot2docker.iso'/>
	I0315 07:37:50.695845   62652 main.go:141] libmachine: (newest-cni-027190)       <target dev='hdc' bus='scsi'/>
	I0315 07:37:50.695856   62652 main.go:141] libmachine: (newest-cni-027190)       <readonly/>
	I0315 07:37:50.695869   62652 main.go:141] libmachine: (newest-cni-027190)     </disk>
	I0315 07:37:50.695880   62652 main.go:141] libmachine: (newest-cni-027190)     <disk type='file' device='disk'>
	I0315 07:37:50.695891   62652 main.go:141] libmachine: (newest-cni-027190)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 07:37:50.695905   62652 main.go:141] libmachine: (newest-cni-027190)       <source file='/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/newest-cni-027190.rawdisk'/>
	I0315 07:37:50.695934   62652 main.go:141] libmachine: (newest-cni-027190)       <target dev='hda' bus='virtio'/>
	I0315 07:37:50.695946   62652 main.go:141] libmachine: (newest-cni-027190)     </disk>
	I0315 07:37:50.695957   62652 main.go:141] libmachine: (newest-cni-027190)     <interface type='network'>
	I0315 07:37:50.695973   62652 main.go:141] libmachine: (newest-cni-027190)       <source network='mk-newest-cni-027190'/>
	I0315 07:37:50.695985   62652 main.go:141] libmachine: (newest-cni-027190)       <model type='virtio'/>
	I0315 07:37:50.695995   62652 main.go:141] libmachine: (newest-cni-027190)     </interface>
	I0315 07:37:50.696003   62652 main.go:141] libmachine: (newest-cni-027190)     <interface type='network'>
	I0315 07:37:50.696011   62652 main.go:141] libmachine: (newest-cni-027190)       <source network='default'/>
	I0315 07:37:50.696021   62652 main.go:141] libmachine: (newest-cni-027190)       <model type='virtio'/>
	I0315 07:37:50.696029   62652 main.go:141] libmachine: (newest-cni-027190)     </interface>
	I0315 07:37:50.696034   62652 main.go:141] libmachine: (newest-cni-027190)     <serial type='pty'>
	I0315 07:37:50.696041   62652 main.go:141] libmachine: (newest-cni-027190)       <target port='0'/>
	I0315 07:37:50.696047   62652 main.go:141] libmachine: (newest-cni-027190)     </serial>
	I0315 07:37:50.696057   62652 main.go:141] libmachine: (newest-cni-027190)     <console type='pty'>
	I0315 07:37:50.696063   62652 main.go:141] libmachine: (newest-cni-027190)       <target type='serial' port='0'/>
	I0315 07:37:50.696074   62652 main.go:141] libmachine: (newest-cni-027190)     </console>
	I0315 07:37:50.696080   62652 main.go:141] libmachine: (newest-cni-027190)     <rng model='virtio'>
	I0315 07:37:50.696100   62652 main.go:141] libmachine: (newest-cni-027190)       <backend model='random'>/dev/random</backend>
	I0315 07:37:50.696112   62652 main.go:141] libmachine: (newest-cni-027190)     </rng>
	I0315 07:37:50.696120   62652 main.go:141] libmachine: (newest-cni-027190)     
	I0315 07:37:50.696128   62652 main.go:141] libmachine: (newest-cni-027190)     
	I0315 07:37:50.696137   62652 main.go:141] libmachine: (newest-cni-027190)   </devices>
	I0315 07:37:50.696144   62652 main.go:141] libmachine: (newest-cni-027190) </domain>
	I0315 07:37:50.696157   62652 main.go:141] libmachine: (newest-cni-027190) 
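	(Likewise, a domain XML like the one printed above could be defined and booted manually with virsh; a minimal sketch, file name illustrative:)
	
	  # define the VM from the XML above, boot it, then check its DHCP lease
	  virsh define newest-cni-027190.xml
	  virsh start newest-cni-027190
	  virsh domifaddr newest-cni-027190   # minikube does equivalent polling in the retries below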
	I0315 07:37:50.701265   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:d8:ce:ec in network default
	I0315 07:37:50.703352   62652 main.go:141] libmachine: (newest-cni-027190) Ensuring networks are active...
	I0315 07:37:50.703396   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:50.704189   62652 main.go:141] libmachine: (newest-cni-027190) Ensuring network default is active
	I0315 07:37:50.704616   62652 main.go:141] libmachine: (newest-cni-027190) Ensuring network mk-newest-cni-027190 is active
	I0315 07:37:50.705258   62652 main.go:141] libmachine: (newest-cni-027190) Getting domain xml...
	I0315 07:37:50.706100   62652 main.go:141] libmachine: (newest-cni-027190) Creating domain...
	I0315 07:37:51.973104   62652 main.go:141] libmachine: (newest-cni-027190) Waiting to get IP...
	I0315 07:37:51.973946   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:51.974412   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:51.974458   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:51.974399   62675 retry.go:31] will retry after 250.232065ms: waiting for machine to come up
	I0315 07:37:52.226905   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:52.227335   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:52.227364   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:52.227285   62675 retry.go:31] will retry after 287.743416ms: waiting for machine to come up
	I0315 07:37:52.516976   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:52.517645   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:52.517675   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:52.517572   62675 retry.go:31] will retry after 398.14141ms: waiting for machine to come up
	I0315 07:37:52.917052   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:52.917544   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:52.917572   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:52.917504   62675 retry.go:31] will retry after 375.593838ms: waiting for machine to come up
	I0315 07:37:53.295473   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:53.296083   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:53.296110   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:53.296001   62675 retry.go:31] will retry after 593.329094ms: waiting for machine to come up
	I0315 07:37:53.890838   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:53.891412   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:53.891472   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:53.891380   62675 retry.go:31] will retry after 805.318405ms: waiting for machine to come up
	I0315 07:37:54.698027   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:54.698485   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:54.698514   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:54.698423   62675 retry.go:31] will retry after 1.123655659s: waiting for machine to come up
	I0315 07:37:55.823955   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:55.824482   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:55.824513   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:55.824427   62675 retry.go:31] will retry after 965.430022ms: waiting for machine to come up
	I0315 07:37:56.791610   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:56.792097   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:56.792126   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:56.792046   62675 retry.go:31] will retry after 1.620600033s: waiting for machine to come up
	I0315 07:37:58.414023   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:37:58.414496   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:37:58.414526   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:37:58.414453   62675 retry.go:31] will retry after 2.151392073s: waiting for machine to come up
	I0315 07:38:00.567777   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:00.568451   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:38:00.568484   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:38:00.568402   62675 retry.go:31] will retry after 2.476319023s: waiting for machine to come up
	I0315 07:38:03.046442   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:03.047005   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:38:03.047036   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:38:03.046961   62675 retry.go:31] will retry after 3.019523008s: waiting for machine to come up
	I0315 07:38:06.068612   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:06.069124   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:38:06.069150   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:38:06.069079   62675 retry.go:31] will retry after 3.021900017s: waiting for machine to come up
	I0315 07:38:09.093851   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:09.094402   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find current IP address of domain newest-cni-027190 in network mk-newest-cni-027190
	I0315 07:38:09.094436   62652 main.go:141] libmachine: (newest-cni-027190) DBG | I0315 07:38:09.094348   62675 retry.go:31] will retry after 3.662705775s: waiting for machine to come up
	I0315 07:38:12.760418   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.760997   62652 main.go:141] libmachine: (newest-cni-027190) Found IP for machine: 192.168.61.229
	I0315 07:38:12.761022   62652 main.go:141] libmachine: (newest-cni-027190) Reserving static IP address...
	I0315 07:38:12.761066   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has current primary IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.761478   62652 main.go:141] libmachine: (newest-cni-027190) DBG | unable to find host DHCP lease matching {name: "newest-cni-027190", mac: "52:54:00:9e:10:d7", ip: "192.168.61.229"} in network mk-newest-cni-027190
	I0315 07:38:12.852901   62652 main.go:141] libmachine: (newest-cni-027190) Reserved static IP address: 192.168.61.229
	I0315 07:38:12.852931   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Getting to WaitForSSH function...
	I0315 07:38:12.852939   62652 main.go:141] libmachine: (newest-cni-027190) Waiting for SSH to be available...
	I0315 07:38:12.857171   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.857706   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:12.857741   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.857919   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Using SSH client type: external
	I0315 07:38:12.857947   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa (-rw-------)
	I0315 07:38:12.857976   62652 main.go:141] libmachine: (newest-cni-027190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:38:12.857989   62652 main.go:141] libmachine: (newest-cni-027190) DBG | About to run SSH command:
	I0315 07:38:12.858006   62652 main.go:141] libmachine: (newest-cni-027190) DBG | exit 0
	I0315 07:38:12.984946   62652 main.go:141] libmachine: (newest-cni-027190) DBG | SSH cmd err, output: <nil>: 
	I0315 07:38:12.985258   62652 main.go:141] libmachine: (newest-cni-027190) KVM machine creation complete!
	I0315 07:38:12.985598   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetConfigRaw
	I0315 07:38:12.986118   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:12.986384   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:12.986570   62652 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 07:38:12.986588   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetState
	I0315 07:38:12.988033   62652 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 07:38:12.988048   62652 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 07:38:12.988054   62652 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 07:38:12.988060   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:12.991401   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.991804   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:12.991844   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:12.991980   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:12.992212   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:12.992380   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:12.992556   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:12.992739   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:12.992950   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:12.992966   62652 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 07:38:13.096573   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:38:13.096606   62652 main.go:141] libmachine: Detecting the provisioner...
	I0315 07:38:13.096617   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.100501   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.101133   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.101172   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.101407   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:13.101739   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.101977   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.102293   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:13.102594   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:13.102806   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:13.102824   62652 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 07:38:13.210256   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 07:38:13.210350   62652 main.go:141] libmachine: found compatible host: buildroot
	I0315 07:38:13.210360   62652 main.go:141] libmachine: Provisioning with buildroot...
	I0315 07:38:13.210371   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetMachineName
	I0315 07:38:13.210690   62652 buildroot.go:166] provisioning hostname "newest-cni-027190"
	I0315 07:38:13.210717   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetMachineName
	I0315 07:38:13.210933   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.214003   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.214379   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.214413   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.214789   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:13.215028   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.215218   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.215431   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:13.215650   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:13.215827   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:13.215840   62652 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-027190 && echo "newest-cni-027190" | sudo tee /etc/hostname
	I0315 07:38:13.335052   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-027190
	
	I0315 07:38:13.335181   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.339283   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.339994   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.340026   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.340345   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:13.340630   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.340969   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.341229   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:13.341545   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:13.341780   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:13.341807   62652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-027190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-027190/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-027190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:38:13.457617   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:38:13.457646   62652 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:38:13.457693   62652 buildroot.go:174] setting up certificates
	I0315 07:38:13.457708   62652 provision.go:84] configureAuth start
	I0315 07:38:13.457718   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetMachineName
	I0315 07:38:13.458053   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetIP
	I0315 07:38:13.461610   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.462370   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.462407   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.462571   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.465372   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.465735   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.465764   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.465966   62652 provision.go:143] copyHostCerts
	I0315 07:38:13.466033   62652 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:38:13.466047   62652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:38:13.466304   62652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:38:13.466474   62652 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:38:13.466488   62652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:38:13.466535   62652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:38:13.466587   62652 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:38:13.466594   62652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:38:13.466621   62652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:38:13.466666   62652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.newest-cni-027190 san=[127.0.0.1 192.168.61.229 localhost minikube newest-cni-027190]
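
minikube generates this server certificate in-process in Go, so the commands below are only an analogous openssl sketch of what the logged step produces: a server key pair signed by the existing CA with the organization and SANs listed above.

	# Analogous openssl sketch only; minikube does this in Go, not via openssl.
	openssl req -new -newkey rsa:2048 -nodes \
		-keyout server-key.pem -out server.csr \
		-subj "/O=jenkins.newest-cni-027190"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
		-CAcreateserial -out server.pem -days 365 \
		-extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.229,DNS:localhost,DNS:minikube,DNS:newest-cni-027190')
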
	I0315 07:38:13.569009   62652 provision.go:177] copyRemoteCerts
	I0315 07:38:13.569067   62652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:38:13.569090   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.572435   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.572811   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.572841   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.573037   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:13.573349   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.573599   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:13.573862   62652 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa Username:docker}
	I0315 07:38:13.658102   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:38:13.688776   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:38:13.723675   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:38:13.757949   62652 provision.go:87] duration metric: took 300.227147ms to configureAuth
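
copyRemoteCerts stages the CA certificate and the freshly generated server pair under /etc/docker on the guest over minikube's own SSH session; a rough scp-based equivalent, using the key and address from this run:

	# Rough equivalent of copyRemoteCerts; minikube uses its internal SSH
	# runner rather than scp, and writes the files with sudo on the guest.
	MK=/home/jenkins/minikube-integration/18213-8825/.minikube
	KEY=$MK/machines/newest-cni-027190/id_rsa
	scp -i "$KEY" "$MK/certs/ca.pem" "$MK/machines/server.pem" \
		"$MK/machines/server-key.pem" docker@192.168.61.229:/tmp/
	ssh -i "$KEY" docker@192.168.61.229 \
		'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server*.pem /etc/docker/'
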
	I0315 07:38:13.757989   62652 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:38:13.758254   62652 config.go:182] Loaded profile config "newest-cni-027190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:38:13.758354   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:13.761898   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.762318   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:13.762349   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:13.762617   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:13.762839   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.763022   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:13.763165   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:13.763362   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:13.763607   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:13.763629   62652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:38:14.063641   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
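
The %!s(MISSING) in the logged command is an artifact of the logger re-printing a printf template; what actually runs on the guest is the plain form below, which drops a CRI-O options file and restarts the service:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
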
	
	I0315 07:38:14.063669   62652 main.go:141] libmachine: Checking connection to Docker...
	I0315 07:38:14.063680   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetURL
	I0315 07:38:14.065123   62652 main.go:141] libmachine: (newest-cni-027190) DBG | Using libvirt version 6000000
	I0315 07:38:14.067481   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.067968   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.068006   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.068236   62652 main.go:141] libmachine: Docker is up and running!
	I0315 07:38:14.068255   62652 main.go:141] libmachine: Reticulating splines...
	I0315 07:38:14.068263   62652 client.go:171] duration metric: took 23.786208727s to LocalClient.Create
	I0315 07:38:14.068290   62652 start.go:167] duration metric: took 23.786278082s to libmachine.API.Create "newest-cni-027190"
	I0315 07:38:14.068311   62652 start.go:293] postStartSetup for "newest-cni-027190" (driver="kvm2")
	I0315 07:38:14.068329   62652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:38:14.068354   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:14.068634   62652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:38:14.068680   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:14.071292   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.071701   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.071750   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.071898   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:14.072111   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:14.072330   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:14.072521   62652 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa Username:docker}
	I0315 07:38:14.151716   62652 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:38:14.156377   62652 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:38:14.156405   62652 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:38:14.156494   62652 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:38:14.156584   62652 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:38:14.156699   62652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:38:14.167049   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:38:14.196774   62652 start.go:296] duration metric: took 128.447942ms for postStartSetup
	I0315 07:38:14.196852   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetConfigRaw
	I0315 07:38:14.197644   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetIP
	I0315 07:38:14.200506   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.200901   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.200945   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.201242   62652 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/config.json ...
	I0315 07:38:14.201479   62652 start.go:128] duration metric: took 23.938311474s to createHost
	I0315 07:38:14.201508   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:14.204105   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.204584   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.204616   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.204826   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:14.205002   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:14.205113   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:14.205217   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:14.205436   62652 main.go:141] libmachine: Using SSH client type: native
	I0315 07:38:14.205590   62652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0315 07:38:14.205602   62652 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:38:14.305661   62652 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710488294.286947030
	
	I0315 07:38:14.305687   62652 fix.go:216] guest clock: 1710488294.286947030
	I0315 07:38:14.305698   62652 fix.go:229] Guest: 2024-03-15 07:38:14.28694703 +0000 UTC Remote: 2024-03-15 07:38:14.201494032 +0000 UTC m=+24.067192568 (delta=85.452998ms)
	I0315 07:38:14.305722   62652 fix.go:200] guest clock delta is within tolerance: 85.452998ms
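
The mangled date format in the log is the same printf artifact; the guest clock probe is simply:

	date +%s.%N    # prints e.g. 1710488294.286947030, which minikube
	               # compares against the host clock to get the 85ms delta above
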
	I0315 07:38:14.305728   62652 start.go:83] releasing machines lock for "newest-cni-027190", held for 24.042655571s
	I0315 07:38:14.305751   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:14.306011   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetIP
	I0315 07:38:14.308849   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.309248   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.309280   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.309469   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:14.310005   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:14.310228   62652 main.go:141] libmachine: (newest-cni-027190) Calling .DriverName
	I0315 07:38:14.310337   62652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:38:14.310387   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:14.310480   62652 ssh_runner.go:195] Run: cat /version.json
	I0315 07:38:14.310506   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHHostname
	I0315 07:38:14.313409   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.313724   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.313753   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.313925   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:14.313955   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.314104   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:14.314299   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:14.314339   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:14.314364   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:14.314473   62652 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa Username:docker}
	I0315 07:38:14.314487   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHPort
	I0315 07:38:14.314667   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHKeyPath
	I0315 07:38:14.314809   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetSSHUsername
	I0315 07:38:14.314938   62652 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/newest-cni-027190/id_rsa Username:docker}
	I0315 07:38:14.431055   62652 ssh_runner.go:195] Run: systemctl --version
	I0315 07:38:14.439065   62652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:38:14.616724   62652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:38:14.623733   62652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:38:14.623791   62652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:38:14.642161   62652 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:38:14.642191   62652 start.go:494] detecting cgroup driver to use...
	I0315 07:38:14.642258   62652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:38:14.663231   62652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:38:14.680321   62652 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:38:14.680412   62652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:38:14.695822   62652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:38:14.711606   62652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:38:14.842182   62652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:38:14.991309   62652 docker.go:233] disabling docker service ...
	I0315 07:38:14.991377   62652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:38:15.007660   62652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:38:15.023468   62652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:38:15.189260   62652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:38:15.341799   62652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
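
Since CRI-O is the selected runtime, the sequence above shuts down and masks the competing container runtimes before CRI-O is configured; condensed into one snippet (same systemctl calls as the log):

	sudo systemctl stop -f containerd
	sudo systemctl stop -f cri-docker.socket
	sudo systemctl stop -f cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
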
	I0315 07:38:15.358736   62652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:38:15.379892   62652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:38:15.379949   62652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:38:15.391800   62652 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:38:15.391863   62652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:38:15.404024   62652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:38:15.418087   62652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:38:15.432279   62652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
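
The crictl and CRI-O configuration steps above boil down to a crictl.yaml pointing at the CRI-O socket plus four in-place edits of 02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) and removal of any stale minikube CNI directory; gathered here with the printf verb restored:

	printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
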
	I0315 07:38:15.445480   62652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:38:15.456284   62652 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:38:15.456334   62652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:38:15.472812   62652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:38:15.484045   62652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:38:15.618071   62652 ssh_runner.go:195] Run: sudo systemctl restart crio
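
When the bridge-netfilter sysctl is missing, as in the exit-255 check above, minikube loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O; the same sequence as plain shell:

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio
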
	I0315 07:38:15.779983   62652 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:38:15.780080   62652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:38:15.786277   62652 start.go:562] Will wait 60s for crictl version
	I0315 07:38:15.786344   62652 ssh_runner.go:195] Run: which crictl
	I0315 07:38:15.791068   62652 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:38:15.842669   62652 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:38:15.842887   62652 ssh_runner.go:195] Run: crio --version
	I0315 07:38:15.877726   62652 ssh_runner.go:195] Run: crio --version
	I0315 07:38:15.914096   62652 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:38:15.915721   62652 main.go:141] libmachine: (newest-cni-027190) Calling .GetIP
	I0315 07:38:15.918451   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:15.918896   62652 main.go:141] libmachine: (newest-cni-027190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:d7", ip: ""} in network mk-newest-cni-027190: {Iface:virbr3 ExpiryTime:2024-03-15 08:38:05 +0000 UTC Type:0 Mac:52:54:00:9e:10:d7 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:newest-cni-027190 Clientid:01:52:54:00:9e:10:d7}
	I0315 07:38:15.918925   62652 main.go:141] libmachine: (newest-cni-027190) DBG | domain newest-cni-027190 has defined IP address 192.168.61.229 and MAC address 52:54:00:9e:10:d7 in network mk-newest-cni-027190
	I0315 07:38:15.919134   62652 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:38:15.923685   62652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:38:15.940366   62652 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0315 07:38:15.942309   62652 kubeadm.go:877] updating cluster {Name:newest-cni-027190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-027190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:38:15.942431   62652 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:38:15.942491   62652 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:38:15.983071   62652 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:38:15.983133   62652 ssh_runner.go:195] Run: which lz4
	I0315 07:38:15.987570   62652 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:38:15.992550   62652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:38:15.992612   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0315 07:38:17.691717   62652 crio.go:444] duration metric: took 1.704173411s to copy over tarball
	I0315 07:38:17.691826   62652 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:38:20.160672   62652 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.468814626s)
	I0315 07:38:20.160696   62652 crio.go:451] duration metric: took 2.468947734s to extract the tarball
	I0315 07:38:20.160703   62652 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:38:20.202066   62652 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:38:20.251530   62652 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:38:20.251554   62652 cache_images.go:84] Images are preloaded, skipping loading
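
No preload was found in the guest image store, so the cached tarball is copied over and unpacked into /var, after which the second crictl query finds all images already present. A rough equivalent of those steps (minikube streams the file over its own SSH session, so the scp/mv below is only an approximation):

	MK=/home/jenkins/minikube-integration/18213-8825/.minikube
	scp -i $MK/machines/newest-cni-027190/id_rsa \
		$MK/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 \
		docker@192.168.61.229:/tmp/preloaded.tar.lz4
	ssh -i $MK/machines/newest-cni-027190/id_rsa docker@192.168.61.229 '
		sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4 &&
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 &&
		sudo rm /preloaded.tar.lz4 &&
		sudo crictl images --output json >/dev/null'
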
	I0315 07:38:20.251561   62652 kubeadm.go:928] updating node { 192.168.61.229 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:38:20.251650   62652 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-027190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-027190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:38:20.251716   62652 ssh_runner.go:195] Run: crio config
	I0315 07:38:20.303360   62652 cni.go:84] Creating CNI manager for ""
	I0315 07:38:20.303387   62652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:38:20.303402   62652 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0315 07:38:20.303427   62652 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.229 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-027190 NodeName:newest-cni-027190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:38:20.303615   62652 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-027190"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:38:20.303704   62652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:38:20.315943   62652 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:38:20.316014   62652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:38:20.329122   62652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0315 07:38:20.349303   62652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:38:20.369249   62652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0315 07:38:20.388952   62652 ssh_runner.go:195] Run: grep 192.168.61.229	control-plane.minikube.internal$ /etc/hosts
	I0315 07:38:20.393805   62652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:38:20.411374   62652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:38:20.545923   62652 ssh_runner.go:195] Run: sudo systemctl start kubelet
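
minikube writes the kubelet drop-in and unit file straight from memory (the 359- and 357-byte scp entries above); an equivalent shell rendering of what ends up on disk, using the ExecStart line logged at kubeadm.go:940:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-027190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.229
	
	[Install]
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
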
	I0315 07:38:20.567267   62652 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190 for IP: 192.168.61.229
	I0315 07:38:20.567293   62652 certs.go:194] generating shared ca certs ...
	I0315 07:38:20.567308   62652 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:20.567519   62652 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:38:20.567584   62652 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:38:20.567599   62652 certs.go:256] generating profile certs ...
	I0315 07:38:20.567682   62652 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.key
	I0315 07:38:20.567699   62652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.crt with IP's: []
	I0315 07:38:20.917708   62652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.crt ...
	I0315 07:38:20.917741   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.crt: {Name:mk7e8eb6efc2dc56995e961a3c72a8dcf3740498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:20.917912   62652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.key ...
	I0315 07:38:20.917924   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/client.key: {Name:mk40465eda31c15c4471d96c79b9920b275dd76d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:20.917997   62652 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key.19dc69aa
	I0315 07:38:20.918016   62652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt.19dc69aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.229]
	I0315 07:38:21.059797   62652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt.19dc69aa ...
	I0315 07:38:21.059825   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt.19dc69aa: {Name:mkc4b9eaaf91a48d41b053dc106e49e4de239793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:21.059982   62652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key.19dc69aa ...
	I0315 07:38:21.059995   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key.19dc69aa: {Name:mk1085f5478c13fa9fdf3cf355fcfe25bf9e7a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:21.060060   62652 certs.go:381] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt.19dc69aa -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt
	I0315 07:38:21.060148   62652 certs.go:385] copying /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key.19dc69aa -> /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key
	I0315 07:38:21.060209   62652 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.key
	I0315 07:38:21.060229   62652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.crt with IP's: []
	I0315 07:38:21.231675   62652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.crt ...
	I0315 07:38:21.231713   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.crt: {Name:mk299722d4d2e75f5ceb21209bfbba4761bf30a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:21.231893   62652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.key ...
	I0315 07:38:21.231908   62652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.key: {Name:mk08201d761f768b06838c29631444da9c92ddbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:38:21.232108   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:38:21.232165   62652 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:38:21.232180   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:38:21.232203   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:38:21.232225   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:38:21.232254   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:38:21.232310   62652 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:38:21.232918   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:38:21.265827   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:38:21.297783   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:38:21.326781   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:38:21.358276   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:38:21.387448   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:38:21.420110   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:38:21.468819   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/newest-cni-027190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:38:21.510646   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:38:21.539457   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:38:21.569651   62652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:38:21.597364   62652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:38:21.617175   62652 ssh_runner.go:195] Run: openssl version
	I0315 07:38:21.623978   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:38:21.636661   62652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:38:21.641879   62652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:38:21.641941   62652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:38:21.648039   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:38:21.660950   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:38:21.674566   62652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:38:21.679780   62652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:38:21.679856   62652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:38:21.686425   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:38:21.699914   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:38:21.713467   62652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:38:21.719641   62652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:38:21.719723   62652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:38:21.727100   62652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
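
The loop above installs each extra CA into the system trust store by symlinking it under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem); the per-certificate pattern is:

	cert=/usr/share/ca-certificates/minikubeCA.pem     # one of the certs handled above
	hash=$(openssl x509 -hash -noout -in "$cert")      # e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
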
	I0315 07:38:21.741168   62652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:38:21.746207   62652 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:38:21.746278   62652 kubeadm.go:391] StartCluster: {Name:newest-cni-027190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-027190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:38:21.746376   62652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:38:21.746431   62652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:38:21.789473   62652 cri.go:89] found id: ""
	I0315 07:38:21.789558   62652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:38:21.804700   62652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:38:21.820434   62652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:38:21.834443   62652 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:38:21.834466   62652 kubeadm.go:156] found existing configuration files:
	
	I0315 07:38:21.834518   62652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:38:21.848302   62652 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:38:21.848360   62652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:38:21.861468   62652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:38:21.872934   62652 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:38:21.873008   62652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:38:21.885025   62652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:38:21.896387   62652 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:38:21.896457   62652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:38:21.907874   62652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:38:21.918957   62652 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:38:21.919043   62652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:38:21.930052   62652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
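
After the stale-config cleanup, bootstrap proceeds with a single kubeadm init call against the generated config; the exact command from the log, reflowed for readability:

	sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
		--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
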
	I0315 07:38:22.058388   62652 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0315 07:38:22.058492   62652 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:38:22.213377   62652 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:38:22.213543   62652 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:38:22.213714   62652 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:38:22.493957   62652 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:38:22.668346   62652 out.go:204]   - Generating certificates and keys ...
	I0315 07:38:22.668502   62652 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:38:22.668607   62652 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:38:22.682587   62652 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:38:22.756516   62652 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:38:22.888457   62652 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:38:23.086053   62652 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:38:23.178730   62652 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:38:23.179016   62652 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-027190] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0315 07:38:23.425297   62652 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:38:23.425550   62652 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-027190] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0315 07:38:24.041155   62652 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:38:24.170413   62652 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:38:24.293696   62652 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:38:24.293984   62652 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:38:24.472863   62652 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:38:24.663253   62652 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0315 07:38:24.906141   62652 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:38:25.285236   62652 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:38:25.432638   62652 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:38:25.433416   62652 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:38:25.437980   62652 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:38:25.439954   62652 out.go:204]   - Booting up control plane ...
	I0315 07:38:25.440075   62652 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:38:25.440501   62652 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:38:25.441425   62652 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:38:25.458596   62652 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:38:25.459469   62652 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:38:25.459531   62652 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:38:25.601810   62652 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.918459762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488311918432134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb38825b-7408-4bb2-bca7-c4407af98aa1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.918982112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b58e5d8d-2a41-404b-8783-860a388f2c7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.919060731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b58e5d8d-2a41-404b-8783-860a388f2c7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.919326831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b58e5d8d-2a41-404b-8783-860a388f2c7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.964459192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2093b574-59b6-4d47-8af4-68005f28f9b2 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.964564747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2093b574-59b6-4d47-8af4-68005f28f9b2 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.966051854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b53b455b-c65b-48b6-88ea-f3f6e45ea637 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.966622081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488311966595829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b53b455b-c65b-48b6-88ea-f3f6e45ea637 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.967492980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39b5f9f2-123f-4b44-bc9f-085bd8e08314 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.967570071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39b5f9f2-123f-4b44-bc9f-085bd8e08314 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:31 embed-certs-709708 crio[696]: time="2024-03-15 07:38:31.967768442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39b5f9f2-123f-4b44-bc9f-085bd8e08314 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.013692120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a9263aa-72e6-40d1-a3da-9200c2193791 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.013992126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a9263aa-72e6-40d1-a3da-9200c2193791 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.015022649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abbb9071-ed43-4aff-bbfd-8d829fe3962f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.015615979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488312015591524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abbb9071-ed43-4aff-bbfd-8d829fe3962f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.016295333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ed0fb1e-b902-487b-b054-b895a52fa9cc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.016372901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ed0fb1e-b902-487b-b054-b895a52fa9cc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.016628585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ed0fb1e-b902-487b-b054-b895a52fa9cc name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.058663713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=220757e6-73b2-4a6b-b248-098b00127719 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.058995984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=220757e6-73b2-4a6b-b248-098b00127719 name=/runtime.v1.RuntimeService/Version
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.061042905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eccb9cda-cbd3-403f-a783-022cd75a14a2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.061755327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488312061727046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eccb9cda-cbd3-403f-a783-022cd75a14a2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.062606696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=928250d0-3576-4065-9aeb-b2c2fd169d43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.062661594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=928250d0-3576-4065-9aeb-b2c2fd169d43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:38:32 embed-certs-709708 crio[696]: time="2024-03-15 07:38:32.062964211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d,PodSandboxId:f513769166160df27ac88581f58335a7b60ad8b56942d43e66793650046c529b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710487465734353530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38dfadf5-dc6c-48b2-939d-6ba5b1639ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 241efe6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d,PodSandboxId:d1cdc3c84d40d1711ea80d410ca91ff7da9d174cbfd8bcfad489a0952e0dc91a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487464049248121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v2mxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feedfa3b-a7de-471c-9b53-7b1eda6279dc,},Annotations:map[string]string{io.kubernetes.container.hash: 346c0bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f,PodSandboxId:39ed58a89dfb1a6c3a722d83b2c8159c8f746c09a12ac9a9bf226abc063bc1b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710487463952533844,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqjfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
49c60e3-dd25-4fb5-8172-bb3e916b619f,},Annotations:map[string]string{io.kubernetes.container.hash: 988c37ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad,PodSandboxId:27762be8106fee535b4f9c762283f2ee0876cdf0751830e32e775128afe67fd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710487463517003741,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8pd5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c8415c-ce4b-48ce-be7f-f9a313a1f969,},Annotations:map[string]string{io.kubernetes.container.hash: d7614d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8,PodSandboxId:42fb0f69a7ceaeb09d43e958346501bd403e8d297992834bda94696462135022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710487443324036048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6c3d2a84beed5a4dffe002102c2c581,},Annotations:map[string]string{io.kubernetes.container.hash: 8626d4a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1,PodSandboxId:3f3f00693e407849343158dab552fba1c2ac190a8ba76aedec56c46162372fcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710487443236921114,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d620ee2d51df7a931d0a439dbc3efc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151,PodSandboxId:a45ace10d036d8bfa9b4815c9cd602c7acf1006a80a8a41d23141e5b1791ec83,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710487443186995300,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d4903a9732534b1ddc8539a59de5bf0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa,PodSandboxId:a358876953f3986535a8d00b18ceffc58a3c9d039ed94eed8955167272629fab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710487443229708798,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed,PodSandboxId:85326a54746b29e6cbb9fd45c9c6a2a8cc855f38a0ba3a297e1cea643e00d1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710487152811674360,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709708,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86fdc8a145f53652f374da13812e6e14,},Annotations:map[string]string{io.kubernetes.container.hash: 59f1a38a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=928250d0-3576-4065-9aeb-b2c2fd169d43 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb666f4e5a048       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   f513769166160       storage-provisioner
	8c49534c347a8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   d1cdc3c84d40d       coredns-5dd5756b68-v2mxd
	cbd6b7eb2be22       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   39ed58a89dfb1       coredns-5dd5756b68-pqjfs
	3d8e1cb9846bd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   27762be8106fe       kube-proxy-8pd5c
	96e34f8838447       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   42fb0f69a7cea       etcd-embed-certs-709708
	9837fe7649aee       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   3f3f00693e407       kube-controller-manager-embed-certs-709708
	60a71b54a648d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   a358876953f39       kube-apiserver-embed-certs-709708
	7ab47ef545847       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   a45ace10d036d       kube-scheduler-embed-certs-709708
	0ff98be2a427f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   19 minutes ago      Exited              kube-apiserver            1                   85326a54746b2       kube-apiserver-embed-certs-709708
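	The table above matches what CRI-O's own CLI prints on the node; a hedged equivalent, run inside the VM:
	
	  minikube ssh -p embed-certs-709708 -- sudo crictl ps -a                # container / image / created / state / name / attempt / pod id / pod
	  minikube ssh -p embed-certs-709708 -- sudo crictl logs 0ff98be2a427f   # e.g. the exited kube-apiserver attempt 1 listed above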
	
	
	==> coredns [8c49534c347a89fb952b50feb6686e3e168860808b8001639a519daa83f3864d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [cbd6b7eb2be22dfdece08388fdb7cd19de911b2a7e80eebd865811f0ef62d84f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
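	The two CoreDNS blocks above are per-container logs; assuming the standard kubeadm label k8s-app=kube-dns and a kubeconfig context named after the profile, the same logs can be pulled through the API server:
	
	  kubectl --context embed-certs-709708 -n kube-system logs -l k8s-app=kube-dns --prefix --tail=50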
	
	
	==> describe nodes <==
	Name:               embed-certs-709708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-709708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=embed-certs-709708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:24:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-709708
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:38:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:34:44 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:34:44 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:34:44 +0000   Fri, 15 Mar 2024 07:24:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:34:44 +0000   Fri, 15 Mar 2024 07:24:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    embed-certs-709708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 483fafe3358b4d4181da45f3abe565d9
	  System UUID:                483fafe3-358b-4d41-81da-45f3abe565d9
	  Boot ID:                    95a6e305-918f-473c-802b-7331b9cbe3c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-pqjfs                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-v2mxd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-709708                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-709708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-709708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8pd5c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-709708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-sz8z6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node embed-certs-709708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node embed-certs-709708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node embed-certs-709708 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node embed-certs-709708 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node embed-certs-709708 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-709708 event: Registered Node embed-certs-709708 in Controller
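	The node description above is the output of `kubectl describe node`; the request/limit summary can be re-checked (and compared with live usage once the metrics-server pod listed above is serving) with:
	
	  kubectl --context embed-certs-709708 describe node embed-certs-709708
	  kubectl --context embed-certs-709708 top node embed-certs-709708    # requires a healthy metrics-server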
	
	
	==> dmesg <==
	[  +0.053877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042919] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.741657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.980000] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.683478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar15 07:19] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.069911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065561] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.204268] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.134165] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.285986] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +5.311504] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +0.076741] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.183433] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +5.790178] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.011429] kauditd_printk_skb: 69 callbacks suppressed
	[Mar15 07:23] kauditd_printk_skb: 3 callbacks suppressed
	[Mar15 07:24] systemd-fstab-generator[3404]: Ignoring "noauto" option for root device
	[  +4.588083] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.687372] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[ +13.392797] systemd-fstab-generator[3928]: Ignoring "noauto" option for root device
	[  +0.085473] kauditd_printk_skb: 14 callbacks suppressed
	[Mar15 07:25] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [96e34f8838447942effef8d7be83f6e48492123c8527b734a6c439771ea5bab8] <==
	{"level":"info","ts":"2024-03-15T07:24:03.753202Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-03-15T07:24:03.753364Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-03-15T07:24:03.753564Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d33e7f1dba1e46ae","initial-advertise-peer-urls":["https://192.168.39.80:2380"],"listen-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T07:24:03.75518Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T07:24:04.575457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgPreVoteResp from d33e7f1dba1e46ae at term 1"}
	{"level":"info","ts":"2024-03-15T07:24:04.575638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgVoteResp from d33e7f1dba1e46ae at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became leader at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.575714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae elected leader d33e7f1dba1e46ae at term 2"}
	{"level":"info","ts":"2024-03-15T07:24:04.57731Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:embed-certs-709708 ClientURLs:[https://192.168.39.80:2379]}","request-path":"/0/members/d33e7f1dba1e46ae/attributes","cluster-id":"e6a6fd39da75dc67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:24:04.577502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:24:04.579965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2024-03-15T07:24:04.580244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:24:04.582888Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:24:04.580274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:24:04.583394Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:24:04.580477Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599415Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:24:04.599638Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:34:04.624557Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":673}
	{"level":"info","ts":"2024-03-15T07:34:04.627316Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":673,"took":"2.346482ms","hash":3421045722}
	{"level":"info","ts":"2024-03-15T07:34:04.627383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3421045722,"revision":673,"compact-revision":-1}
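The etcd log above shows a clean single-member startup: the member wins the election at term 2 and later performs a routine compaction around revision 673, so etcd itself is not the failing component here. A minimal sketch of how one could confirm member health by hand, assuming the kubeadm-style static pod name etcd-embed-certs-709708 and minikube's usual certificate locations under /var/lib/minikube/certs/etcd (both the pod name and the cert paths are assumptions, not taken from this report):

  kubectl --context embed-certs-709708 -n kube-system exec etcd-embed-certs-709708 -- \
    etcdctl --endpoints=https://192.168.39.80:2379 \
            --cacert=/var/lib/minikube/certs/etcd/ca.crt \
            --cert=/var/lib/minikube/certs/etcd/server.crt \
            --key=/var/lib/minikube/certs/etcd/server.key \
            endpoint status --write-out=table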
	
	
	==> kernel <==
	 07:38:32 up 19 min,  0 users,  load average: 0.31, 0.24, 0.21
	Linux embed-certs-709708 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0ff98be2a427fa045d90d5dbd51d648443726470c57b3efb9c1a840cf68a21ed] <==
	W0315 07:23:58.166601       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.169008       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.285751       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.322958       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.413344       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.585456       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.687598       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.787249       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.788359       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:58.794278       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.045937       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.114631       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.162281       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.226071       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.229669       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.239224       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.260608       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.325568       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.463870       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.677773       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.692587       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.701533       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.724227       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.759358       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 07:23:59.869801       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [60a71b54a648d64fe6173ccceff31848c986fe966f6af291a7754f26c91304fa] <==
	W0315 07:34:07.196381       1 handler_proxy.go:93] no RequestInfo found in the context
	W0315 07:34:07.196413       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:34:07.196819       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:34:07.196867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0315 07:34:07.196918       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:34:07.198182       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:35:06.100076       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:35:07.197472       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:35:07.197651       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:35:07.197685       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:35:07.198724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:35:07.198779       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:35:07.198791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:36:06.100041       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:37:06.100600       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0315 07:37:07.198856       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:37:07.199053       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:37:07.199111       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0315 07:37:07.199278       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:37:07.199331       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0315 07:37:07.200709       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:38:06.099835       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [9837fe7649aeee5e0f66119edb13fdc82e45c85eca337c49f505f7ccba365db1] <==
	I0315 07:32:52.936605       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:33:22.440458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:33:22.945713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:33:52.446252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:33:52.955875       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:34:22.452691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:34:22.965672       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:34:52.459351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:34:52.973994       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:35:20.630719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="347.344µs"
	E0315 07:35:22.466726       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:35:22.983923       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0315 07:35:34.626375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="159.833µs"
	E0315 07:35:52.472483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:35:52.993561       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:22.479027       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:23.003384       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:36:52.484810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:36:53.011723       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:22.490762       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:23.019817       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:37:52.496719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:37:53.034085       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0315 07:38:22.505415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0315 07:38:23.046332       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
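The recurring "stale GroupVersion discovery: metrics.k8s.io/v1beta1" and garbage-collector errors above follow from the v1beta1.metrics.k8s.io APIService never becoming Available, because its backing metrics-server pod never starts. A minimal sketch of inspecting that APIService directly with plain kubectl (nothing here is minikube-specific):

  kubectl --context embed-certs-709708 get apiservice v1beta1.metrics.k8s.io
  kubectl --context embed-certs-709708 get apiservice v1beta1.metrics.k8s.io \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'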
	
	
	==> kube-proxy [3d8e1cb9846bde73cfe708449ec6a8379c76aca44ff4a6aa4e1306b967851bad] <==
	I0315 07:24:24.602816       1 server_others.go:69] "Using iptables proxy"
	I0315 07:24:24.622582       1 node.go:141] Successfully retrieved node IP: 192.168.39.80
	I0315 07:24:24.748610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 07:24:24.748629       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 07:24:24.754439       1 server_others.go:152] "Using iptables Proxier"
	I0315 07:24:24.755221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:24:24.758382       1 server.go:846] "Version info" version="v1.28.4"
	I0315 07:24:24.758395       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:24:24.761237       1 config.go:188] "Starting service config controller"
	I0315 07:24:24.761284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:24:24.761310       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:24:24.761317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:24:24.765721       1 config.go:315] "Starting node config controller"
	I0315 07:24:24.765733       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:24:24.862226       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:24:24.862262       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:24:24.866890       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ab47ef545847b0d0579e723d97e9fdda221cd14739383758d870b947a8a5151] <==
	W0315 07:24:07.014115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.014338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 07:24:07.029109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:24:07.029204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 07:24:07.112042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:24:07.112220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 07:24:07.286747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 07:24:07.287535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 07:24:07.361338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 07:24:07.361391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 07:24:07.389421       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:24:07.390060       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:24:07.441548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 07:24:07.441671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 07:24:07.459500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:24:07.459547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 07:24:07.469951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.470249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 07:24:07.555521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:24:07.555862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 07:24:07.568481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 07:24:07.568528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 07:24:07.573853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:24:07.573875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0315 07:24:09.800484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:36:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:36:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:36:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:36:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:36:13 embed-certs-709708 kubelet[3731]: E0315 07:36:13.607694    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:36:28 embed-certs-709708 kubelet[3731]: E0315 07:36:28.607509    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:36:40 embed-certs-709708 kubelet[3731]: E0315 07:36:40.608507    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:36:53 embed-certs-709708 kubelet[3731]: E0315 07:36:53.607263    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:37:06 embed-certs-709708 kubelet[3731]: E0315 07:37:06.607984    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:37:09 embed-certs-709708 kubelet[3731]: E0315 07:37:09.633348    3731 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:37:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:37:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:37:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:37:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:37:20 embed-certs-709708 kubelet[3731]: E0315 07:37:20.609363    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:37:33 embed-certs-709708 kubelet[3731]: E0315 07:37:33.609633    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:37:47 embed-certs-709708 kubelet[3731]: E0315 07:37:47.609906    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:38:02 embed-certs-709708 kubelet[3731]: E0315 07:38:02.609514    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:38:09 embed-certs-709708 kubelet[3731]: E0315 07:38:09.632906    3731 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 07:38:09 embed-certs-709708 kubelet[3731]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 07:38:09 embed-certs-709708 kubelet[3731]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 07:38:09 embed-certs-709708 kubelet[3731]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 07:38:09 embed-certs-709708 kubelet[3731]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 07:38:13 embed-certs-709708 kubelet[3731]: E0315 07:38:13.612359    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
	Mar 15 07:38:28 embed-certs-709708 kubelet[3731]: E0315 07:38:28.607893    3731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-sz8z6" podUID="27033a6d-4694-433f-9b30-ca77087067f4"
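The kubelet entries above explain why metrics-server never becomes ready: the container image reference points at fake.domain/registry.k8s.io/echoserver:1.4, which is not resolvable, so the pod sits in ImagePullBackOff for the entire run. A minimal sketch of checking the waiting reason and related events from outside the node (standard pod status fields; note the pod may already have been recreated under a different name by the time this is run, as the post-mortem below shows):

  kubectl --context embed-certs-709708 -n kube-system get pod metrics-server-57f55c9bc5-sz8z6 \
    -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
  kubectl --context embed-certs-709708 -n kube-system get events \
    --field-selector involvedObject.name=metrics-server-57f55c9bc5-sz8z6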
	
	
	==> storage-provisioner [fb666f4e5a048652666e4e3af686bf2352c8c74456de3293ddbee50334775b3d] <==
	I0315 07:24:25.883491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:24:25.896982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:24:25.897201       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:24:25.906431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:24:25.906897       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd!
	I0315 07:24:25.907793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"554fdb25-4aa4-4a43-b92d-ef6385b035d4", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd became leader
	I0315 07:24:26.007696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-709708_1109b661-e5c1-44cc-be0d-12cbfdbef0fd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709708 -n embed-certs-709708
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-709708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-sz8z6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6: exit status 1 (86.11971ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-sz8z6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-709708 describe pod metrics-server-57f55c9bc5-sz8z6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (303.19s)
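The describe step fails with NotFound because the metrics-server pod that was listed a moment earlier is already gone by the time describe runs. Selecting by label instead of pod name is more robust for this kind of post-mortem; a minimal sketch, assuming the metrics-server pods carry the conventional k8s-app=metrics-server label (an assumption, not confirmed by this report):

  kubectl --context embed-certs-709708 -n kube-system get pods -l k8s-app=metrics-server -o wide
  kubectl --context embed-certs-709708 -n kube-system describe pods -l k8s-app=metrics-server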

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[the warning above was emitted 60 times in this excerpt; every poll failed the same way because the API server at 192.168.61.243:8443 refused connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
E0315 07:37:24.124364   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (254.745525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-981420" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-981420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-981420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.769µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-981420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
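For reference, the assertion that fails here polls the kubernetes-dashboard namespace for pods labeled k8s-app=kubernetes-dashboard and then inspects the addon deployment's image. The following is a minimal, illustrative client-go sketch of that kind of check; it is not the harness code, and the kubeconfig path is the KUBECONFIG value recorded later in this log (adjust it for other environments). Against a stopped apiserver it fails with the same "connection refused" error repeated above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the KUBECONFIG setting shown in this run's log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18213-8825/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same namespace and label selector the test polls.
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver stopped this is where "connection refused" surfaces.
		fmt.Println("pod list failed:", err)
		return
	}
	// Print each container image so the expected registry.k8s.io/echoserver:1.4 override can be verified.
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			fmt.Printf("pod %s container %s image %s\n", p.Name, c.Name, c.Image)
		}
	}
}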
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (240.808141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-981420 logs -n 25: (1.553881871s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-901843 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | disable-driver-mounts-901843                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559541 ssh                                | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559541 -- sudo                         | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559541                                 | cert-options-559541          | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:08 UTC |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:08 UTC | 15 Mar 24 07:10 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709708            | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-128870  | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC | 15 Mar 24 07:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:10 UTC |                     |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-266938                              | cert-expiration-266938       | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:11 UTC |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:11 UTC | 15 Mar 24 07:13 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-981420        | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709708                 | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709708                                  | embed-certs-709708           | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-128870       | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-128870 | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:23 UTC |
	|         | default-k8s-diff-port-128870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-184055             | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC | 15 Mar 24 07:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-981420             | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC | 15 Mar 24 07:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-981420                              | old-k8s-version-981420       | jenkins | v1.32.0 | 15 Mar 24 07:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-184055                  | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-184055                                   | no-preload-184055            | jenkins | v1.32.0 | 15 Mar 24 07:16 UTC | 15 Mar 24 07:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:16:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:16:22.747762   57679 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:16:22.747880   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.747886   57679 out.go:304] Setting ErrFile to fd 2...
	I0315 07:16:22.747893   57679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:16:22.748104   57679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:16:22.748696   57679 out.go:298] Setting JSON to false
	I0315 07:16:22.749747   57679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7079,"bootTime":1710479904,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:16:22.749818   57679 start.go:139] virtualization: kvm guest
	I0315 07:16:22.752276   57679 out.go:177] * [no-preload-184055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:16:22.753687   57679 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:16:22.753772   57679 notify.go:220] Checking for updates...
	I0315 07:16:22.755126   57679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:16:22.756685   57679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:16:22.758227   57679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:16:22.759575   57679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:16:22.760832   57679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:16:22.762588   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:16:22.762962   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.763003   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.777618   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0315 07:16:22.777990   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.778468   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.778491   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.778835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.779043   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.779288   57679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:16:22.779570   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:16:22.779605   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:16:22.794186   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0315 07:16:22.794594   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:16:22.795069   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:16:22.795090   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:16:22.795418   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:16:22.795588   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:16:22.827726   57679 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 07:16:22.829213   57679 start.go:297] selected driver: kvm2
	I0315 07:16:22.829230   57679 start.go:901] validating driver "kvm2" against &{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.829371   57679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:16:22.830004   57679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.830080   57679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 07:16:22.844656   57679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 07:16:22.845038   57679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:16:22.845109   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:16:22.845124   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:16:22.845169   57679 start.go:340] cluster config:
	{Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:16:22.845291   57679 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.847149   57679 out.go:177] * Starting "no-preload-184055" primary control-plane node in "no-preload-184055" cluster
	I0315 07:16:23.844740   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:22.848324   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:16:22.848512   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:16:22.848590   57679 cache.go:107] acquiring lock: {Name:mk507a91ea82cf19891e8acd1558c032f04f34eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848622   57679 cache.go:107] acquiring lock: {Name:mk959cbd4cbef82f8e2fca10306b1414cda29a8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848632   57679 cache.go:107] acquiring lock: {Name:mk362e46f364e32161c4ebf9498517b9233bb5c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848688   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0315 07:16:22.848697   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0315 07:16:22.848698   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0315 07:16:22.848706   57679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.352µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 90.813µs
	I0315 07:16:22.848709   57679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 77.622µs
	I0315 07:16:22.848717   57679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0315 07:16:22.848718   57679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848585   57679 cache.go:107] acquiring lock: {Name:mkf8a4299d08827f7fbb58e733b5ddc041f66eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848724   57679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848750   57679 start.go:360] acquireMachinesLock for no-preload-184055: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:16:22.848727   57679 cache.go:107] acquiring lock: {Name:mk24e3bab793a5a4921104cba8dd44ccabe70234 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848782   57679 cache.go:107] acquiring lock: {Name:mk057d3af722495b3eef9be2bd397203a0cfd083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848798   57679 cache.go:107] acquiring lock: {Name:mk7569b19ccb6c8aefc671391db539f28eec7fd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848765   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0315 07:16:22.848854   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0315 07:16:22.848720   57679 cache.go:107] acquiring lock: {Name:mk61102053a95edfffd464e9d914d4a82b692d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:16:22.848866   57679 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 204.073µs
	I0315 07:16:22.848887   57679 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0315 07:16:22.848876   57679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 292.086µs
	I0315 07:16:22.848903   57679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848917   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0315 07:16:22.848918   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0315 07:16:22.848932   57679 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 185.306µs
	I0315 07:16:22.848935   57679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 217.165µs
	I0315 07:16:22.848945   57679 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0315 07:16:22.848947   57679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0315 07:16:22.848925   57679 cache.go:115] /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0315 07:16:22.848957   57679 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 307.421µs
	I0315 07:16:22.848968   57679 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0315 07:16:22.848975   57679 cache.go:87] Successfully saved all images to host disk.
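The cache.go lines above show each required image tarball being checked under its own lock and skipped when it already exists on disk. A minimal Go sketch of that check-before-download pattern (hypothetical helper names and cache path, not minikube's actual cache.go):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
    )

    var cacheLocks sync.Map // image name -> *sync.Mutex

    // ensureCached takes a per-image lock, then skips the save when the cached
    // tarball is already present, mirroring the "exists ... succeeded" lines above.
    func ensureCached(cacheDir, image string) error {
        m, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
        mu := m.(*sync.Mutex)
        mu.Lock()
        defer mu.Unlock()

        // e.g. registry.k8s.io/pause:3.9 -> <cacheDir>/registry.k8s.io/pause_3.9
        tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        if _, err := os.Stat(tar); err == nil {
            fmt.Printf("cache image %q -> %q exists, skipping save\n", image, tar)
            return nil
        }
        // a real tool would pull and save the image here; omitted in this sketch
        return fmt.Errorf("image %s not cached and download not implemented in sketch", image)
    }

    func main() {
        _ = ensureCached("/tmp/minikube-cache", "registry.k8s.io/pause:3.9")
    }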
	I0315 07:16:26.916770   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:32.996767   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:36.068808   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:42.148798   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:45.220730   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:51.300807   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:16:54.372738   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:00.452791   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:03.524789   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:09.604839   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:12.676877   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:18.756802   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:21.828800   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:27.908797   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:30.980761   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:37.060753   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:40.132766   56654 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.80:22: connect: no route to host
	I0315 07:17:43.137043   56818 start.go:364] duration metric: took 4m17.205331062s to acquireMachinesLock for "default-k8s-diff-port-128870"
	I0315 07:17:43.137112   56818 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:17:43.137122   56818 fix.go:54] fixHost starting: 
	I0315 07:17:43.137451   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:17:43.137480   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:17:43.152050   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0315 07:17:43.152498   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:17:43.153036   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:17:43.153061   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:17:43.153366   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:17:43.153554   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:17:43.153708   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:17:43.155484   56818 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128870: state=Stopped err=<nil>
	I0315 07:17:43.155516   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	W0315 07:17:43.155670   56818 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:17:43.157744   56818 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-128870" ...
	I0315 07:17:43.134528   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:17:43.134565   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.134898   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:17:43.134926   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:17:43.135125   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:17:43.136914   56654 machine.go:97] duration metric: took 4m37.183421114s to provisionDockerMachine
	I0315 07:17:43.136958   56654 fix.go:56] duration metric: took 4m37.359336318s for fixHost
	I0315 07:17:43.136965   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 4m37.359363399s
	W0315 07:17:43.136987   56654 start.go:713] error starting host: provision: host is not running
	W0315 07:17:43.137072   56654 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0315 07:17:43.137084   56654 start.go:728] Will try again in 5 seconds ...
	I0315 07:17:43.159200   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Start
	I0315 07:17:43.159394   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring networks are active...
	I0315 07:17:43.160333   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network default is active
	I0315 07:17:43.160729   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Ensuring network mk-default-k8s-diff-port-128870 is active
	I0315 07:17:43.161069   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Getting domain xml...
	I0315 07:17:43.161876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Creating domain...
	I0315 07:17:44.374946   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting to get IP...
	I0315 07:17:44.375866   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376513   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.376573   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.376456   57915 retry.go:31] will retry after 269.774205ms: waiting for machine to come up
	I0315 07:17:44.648239   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648702   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:44.648733   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:44.648652   57915 retry.go:31] will retry after 354.934579ms: waiting for machine to come up
	I0315 07:17:45.005335   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005784   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.005812   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.005747   57915 retry.go:31] will retry after 446.721698ms: waiting for machine to come up
	I0315 07:17:45.454406   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.454981   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.454902   57915 retry.go:31] will retry after 465.769342ms: waiting for machine to come up
	I0315 07:17:48.139356   56654 start.go:360] acquireMachinesLock for embed-certs-709708: {Name:mk586b481ecda7c454c5ea28c054514c88c0788e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 07:17:45.922612   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923047   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:45.923076   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:45.923009   57915 retry.go:31] will retry after 555.068076ms: waiting for machine to come up
	I0315 07:17:46.479757   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480197   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:46.480228   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:46.480145   57915 retry.go:31] will retry after 885.373865ms: waiting for machine to come up
	I0315 07:17:47.366769   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367165   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:47.367185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:47.367130   57915 retry.go:31] will retry after 1.126576737s: waiting for machine to come up
	I0315 07:17:48.494881   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:48.495294   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:48.495227   57915 retry.go:31] will retry after 1.061573546s: waiting for machine to come up
	I0315 07:17:49.558097   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558471   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:49.558525   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:49.558416   57915 retry.go:31] will retry after 1.177460796s: waiting for machine to come up
	I0315 07:17:50.737624   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:50.738363   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:50.738278   57915 retry.go:31] will retry after 2.141042369s: waiting for machine to come up
	I0315 07:17:52.881713   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882207   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:52.882232   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:52.882169   57915 retry.go:31] will retry after 2.560592986s: waiting for machine to come up
	I0315 07:17:55.445577   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:55.446095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:55.446040   57915 retry.go:31] will retry after 2.231357673s: waiting for machine to come up
	I0315 07:17:57.680400   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680831   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | unable to find current IP address of domain default-k8s-diff-port-128870 in network mk-default-k8s-diff-port-128870
	I0315 07:17:57.680865   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | I0315 07:17:57.680788   57915 retry.go:31] will retry after 2.83465526s: waiting for machine to come up
	I0315 07:18:00.518102   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Found IP for machine: 192.168.50.123
	I0315 07:18:00.518586   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has current primary IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.518602   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserving static IP address...
	I0315 07:18:00.519002   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.519030   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Reserved static IP address: 192.168.50.123
	I0315 07:18:00.519048   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | skip adding static IP to network mk-default-k8s-diff-port-128870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-128870", mac: "52:54:00:df:8d:7d", ip: "192.168.50.123"}
	I0315 07:18:00.519071   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Getting to WaitForSSH function...
	I0315 07:18:00.519088   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Waiting for SSH to be available...
	I0315 07:18:00.521330   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521694   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.521726   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.521813   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH client type: external
	I0315 07:18:00.521837   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa (-rw-------)
	I0315 07:18:00.521868   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:00.521884   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | About to run SSH command:
	I0315 07:18:00.521897   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | exit 0
	I0315 07:18:00.652642   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | SSH cmd err, output: <nil>: 
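The retry.go lines above poll for the VM's DHCP lease with steadily growing intervals until SSH becomes reachable. A minimal Go sketch of that wait-with-backoff pattern (hypothetical function names and toy timings, not minikube's actual retry implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout elapses,
    // sleeping a growing, jittered interval between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // grow the delay and add jitter, as in the increasing "will retry after" intervals above
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 3*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        // toy lookup that never succeeds, just to show the call shape
        _, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 2*time.Second)
        fmt.Println(err)
    }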
	I0315 07:18:00.652978   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetConfigRaw
	I0315 07:18:00.653626   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:00.656273   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656640   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.656672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.656917   56818 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/config.json ...
	I0315 07:18:00.657164   56818 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:00.657185   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:00.657421   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.659743   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660073   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.660095   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.660251   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.660431   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660563   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.660718   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.660892   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.661129   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.661144   56818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:00.777315   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:00.777345   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777642   56818 buildroot.go:166] provisioning hostname "default-k8s-diff-port-128870"
	I0315 07:18:00.777672   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:00.777864   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.780634   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.780990   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.781016   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.781217   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.781433   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781584   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.781778   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.781942   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.782111   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.782125   56818 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-128870 && echo "default-k8s-diff-port-128870" | sudo tee /etc/hostname
	I0315 07:18:01.853438   57277 start.go:364] duration metric: took 3m34.249816203s to acquireMachinesLock for "old-k8s-version-981420"
	I0315 07:18:01.853533   57277 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:01.853549   57277 fix.go:54] fixHost starting: 
	I0315 07:18:01.853935   57277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:01.853971   57277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:01.870552   57277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0315 07:18:01.871007   57277 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:01.871454   57277 main.go:141] libmachine: Using API Version  1
	I0315 07:18:01.871478   57277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:01.871841   57277 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:01.872032   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:01.872170   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetState
	I0315 07:18:01.873913   57277 fix.go:112] recreateIfNeeded on old-k8s-version-981420: state=Stopped err=<nil>
	I0315 07:18:01.873951   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	W0315 07:18:01.874118   57277 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:01.876547   57277 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-981420" ...
	I0315 07:18:01.878314   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .Start
	I0315 07:18:01.878500   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring networks are active...
	I0315 07:18:01.879204   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network default is active
	I0315 07:18:01.879493   57277 main.go:141] libmachine: (old-k8s-version-981420) Ensuring network mk-old-k8s-version-981420 is active
	I0315 07:18:01.879834   57277 main.go:141] libmachine: (old-k8s-version-981420) Getting domain xml...
	I0315 07:18:01.880563   57277 main.go:141] libmachine: (old-k8s-version-981420) Creating domain...
	I0315 07:18:00.912549   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-128870
	
	I0315 07:18:00.912579   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:00.915470   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915807   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:00.915833   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:00.915985   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:00.916191   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916356   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:00.916509   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:00.916685   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:00.916879   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:00.916897   56818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-128870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-128870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-128870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:01.042243   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:01.042276   56818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:01.042293   56818 buildroot.go:174] setting up certificates
	I0315 07:18:01.042301   56818 provision.go:84] configureAuth start
	I0315 07:18:01.042308   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetMachineName
	I0315 07:18:01.042559   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.045578   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.045896   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.045924   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.046054   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.048110   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048511   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.048540   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.048674   56818 provision.go:143] copyHostCerts
	I0315 07:18:01.048731   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:01.048740   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:01.048820   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:01.048937   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:01.048949   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:01.048984   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:01.049062   56818 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:01.049072   56818 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:01.049099   56818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:01.049175   56818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-128870 san=[127.0.0.1 192.168.50.123 default-k8s-diff-port-128870 localhost minikube]
	I0315 07:18:01.136583   56818 provision.go:177] copyRemoteCerts
	I0315 07:18:01.136644   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:01.136668   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.139481   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.139790   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.139820   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.140044   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.140268   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.140426   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.140605   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.227851   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:01.253073   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0315 07:18:01.277210   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:18:01.301587   56818 provision.go:87] duration metric: took 259.272369ms to configureAuth
	I0315 07:18:01.301620   56818 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:01.301833   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:18:01.301912   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.304841   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305199   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.305226   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.305420   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.305646   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.305804   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.306005   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.306194   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.306368   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.306388   56818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:01.596675   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:01.596699   56818 machine.go:97] duration metric: took 939.518015ms to provisionDockerMachine
	I0315 07:18:01.596712   56818 start.go:293] postStartSetup for "default-k8s-diff-port-128870" (driver="kvm2")
	I0315 07:18:01.596726   56818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:01.596745   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.597094   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:01.597122   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.599967   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600380   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.600408   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.600568   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.600780   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.600938   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.601114   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.692590   56818 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:01.697180   56818 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:01.697205   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:01.697267   56818 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:01.697334   56818 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:01.697417   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:01.708395   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:01.733505   56818 start.go:296] duration metric: took 136.779862ms for postStartSetup
	I0315 07:18:01.733550   56818 fix.go:56] duration metric: took 18.596427186s for fixHost
	I0315 07:18:01.733575   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.736203   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736601   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.736643   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.736829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.737040   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737204   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.737348   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.737508   56818 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:01.737706   56818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.123 22 <nil> <nil>}
	I0315 07:18:01.737725   56818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:01.853300   56818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487081.820376499
	
	I0315 07:18:01.853329   56818 fix.go:216] guest clock: 1710487081.820376499
	I0315 07:18:01.853337   56818 fix.go:229] Guest: 2024-03-15 07:18:01.820376499 +0000 UTC Remote: 2024-03-15 07:18:01.733555907 +0000 UTC m=+275.961651377 (delta=86.820592ms)
	I0315 07:18:01.853356   56818 fix.go:200] guest clock delta is within tolerance: 86.820592ms
	I0315 07:18:01.853360   56818 start.go:83] releasing machines lock for "default-k8s-diff-port-128870", held for 18.71627589s
	I0315 07:18:01.853389   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.853693   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:01.856530   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.856917   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.856955   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.857172   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857720   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857906   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:18:01.857978   56818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:01.858021   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.858098   56818 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:01.858116   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:18:01.860829   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861288   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861332   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861449   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861520   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:01.861549   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:01.861611   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861714   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:18:01.861800   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.861880   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:18:01.861922   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.862155   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:18:01.862337   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:18:01.989393   56818 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:01.995779   56818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:02.145420   56818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:02.151809   56818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:02.151877   56818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:02.168805   56818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:02.168834   56818 start.go:494] detecting cgroup driver to use...
	I0315 07:18:02.168891   56818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:02.185087   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:02.200433   56818 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:02.200523   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:02.214486   56818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:02.228482   56818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:02.349004   56818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:02.539676   56818 docker.go:233] disabling docker service ...
	I0315 07:18:02.539737   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:02.556994   56818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:02.574602   56818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:02.739685   56818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:02.880553   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:02.895290   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:02.913745   56818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:02.913812   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.924271   56818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:02.924347   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.935118   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:02.945461   56818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
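The three sed edits above should leave the cri-o drop-in with the pause image, cgroup manager and conmon cgroup settings; a rough verification (not executed in this run) is:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the sed expressions above:
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"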
	I0315 07:18:02.955685   56818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:02.966655   56818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:02.976166   56818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:02.976262   56818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:02.989230   56818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
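The failed sysctl probe above only means br_netfilter was not loaded yet; the modprobe plus the ip_forward write are what bridged pod traffic needs, and the same checks can be repeated by hand (illustrative, not from this log):

    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward                # expect 1 after the echo above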
	I0315 07:18:02.999581   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:03.137733   56818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:03.278802   56818 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:03.278872   56818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:03.284097   56818 start.go:562] Will wait 60s for crictl version
	I0315 07:18:03.284169   56818 ssh_runner.go:195] Run: which crictl
	I0315 07:18:03.288428   56818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:03.335269   56818 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:03.335353   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.367520   56818 ssh_runner.go:195] Run: crio --version
	I0315 07:18:03.402669   56818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:18:03.404508   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetIP
	I0315 07:18:03.407352   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.407795   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:18:03.407826   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:18:03.408066   56818 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:03.412653   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
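The bash one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal entry and appending the current gateway IP; the result can be spot-checked with (assumed command):

    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.50.1   host.minikube.internal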
	I0315 07:18:03.426044   56818 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:03.426158   56818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:18:03.426198   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:03.464362   56818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:18:03.464439   56818 ssh_runner.go:195] Run: which lz4
	I0315 07:18:03.468832   56818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 07:18:03.473378   56818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:03.473421   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:18:05.340327   56818 crio.go:444] duration metric: took 1.871524779s to copy over tarball
	I0315 07:18:05.340414   56818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:03.118539   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting to get IP...
	I0315 07:18:03.119348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.119736   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.119833   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.119731   58029 retry.go:31] will retry after 269.066084ms: waiting for machine to come up
	I0315 07:18:03.390335   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.390932   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.390971   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.390888   58029 retry.go:31] will retry after 250.971116ms: waiting for machine to come up
	I0315 07:18:03.643446   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.644091   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.644130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.644059   58029 retry.go:31] will retry after 302.823789ms: waiting for machine to come up
	I0315 07:18:03.948802   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:03.949249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:03.949275   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:03.949206   58029 retry.go:31] will retry after 416.399441ms: waiting for machine to come up
	I0315 07:18:04.366812   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.367257   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.367278   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.367211   58029 retry.go:31] will retry after 547.717235ms: waiting for machine to come up
	I0315 07:18:04.917206   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:04.917669   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:04.917706   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:04.917626   58029 retry.go:31] will retry after 863.170331ms: waiting for machine to come up
	I0315 07:18:05.782935   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:05.783237   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:05.783260   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:05.783188   58029 retry.go:31] will retry after 743.818085ms: waiting for machine to come up
	I0315 07:18:06.528158   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:06.528531   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:06.528575   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:06.528501   58029 retry.go:31] will retry after 983.251532ms: waiting for machine to come up
	I0315 07:18:07.911380   56818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570937289s)
	I0315 07:18:07.911408   56818 crio.go:451] duration metric: took 2.571054946s to extract the tarball
	I0315 07:18:07.911434   56818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:07.953885   56818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:08.001441   56818 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:18:08.001462   56818 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:18:08.001470   56818 kubeadm.go:928] updating node { 192.168.50.123 8444 v1.28.4 crio true true} ...
	I0315 07:18:08.001595   56818 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-128870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:08.001672   56818 ssh_runner.go:195] Run: crio config
	I0315 07:18:08.049539   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:08.049563   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:08.049580   56818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:08.049598   56818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-128870 NodeName:default-k8s-diff-port-128870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:18:08.049733   56818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-128870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:08.049790   56818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:18:08.060891   56818 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:08.060953   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:08.071545   56818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0315 07:18:08.089766   56818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:08.110686   56818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0315 07:18:08.130762   56818 ssh_runner.go:195] Run: grep 192.168.50.123	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:08.135240   56818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:08.150970   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:08.301047   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
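At this point the kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (328 bytes) from the scp lines above are on disk and the service has been started; a hedged way to confirm the ExecStart override took effect (not part of this run) is:

    systemctl cat kubelet | grep -A3 '^ExecStart'
    systemctl is-active kubelet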
	I0315 07:18:08.325148   56818 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870 for IP: 192.168.50.123
	I0315 07:18:08.325172   56818 certs.go:194] generating shared ca certs ...
	I0315 07:18:08.325194   56818 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:08.325357   56818 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:08.325407   56818 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:08.325418   56818 certs.go:256] generating profile certs ...
	I0315 07:18:08.325501   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.key
	I0315 07:18:08.325584   56818 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key.40c56f57
	I0315 07:18:08.325644   56818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key
	I0315 07:18:08.325825   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:08.325876   56818 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:08.325891   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:08.325923   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:08.325957   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:08.325987   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:08.326045   56818 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:08.326886   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:08.381881   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:08.418507   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:08.454165   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:08.489002   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 07:18:08.522971   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:18:08.559799   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:08.587561   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:08.614284   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:08.640590   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:08.668101   56818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:08.695119   56818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:08.715655   56818 ssh_runner.go:195] Run: openssl version
	I0315 07:18:08.722291   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:08.735528   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741324   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.741382   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:08.747643   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:08.760304   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:08.772446   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777368   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.777415   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:08.783346   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:08.795038   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:08.807132   56818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812420   56818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.812491   56818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:08.818902   56818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
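Each /etc/ssl/certs/<hash>.0 symlink created above is named by OpenSSL's subject hash of the corresponding PEM, which is how the system trust store looks it up; the hashes can be reproduced manually (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching b5213941.0 above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem       # prints 3ec20f2e, matching 3ec20f2e.0 above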
	I0315 07:18:08.831175   56818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:08.836679   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:08.844206   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:08.851170   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:08.858075   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:08.864864   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:08.871604   56818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
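The -checkend 86400 probes above ask OpenSSL whether each certificate will still be valid 24 hours from now; exit status 0 means it will not expire within that window. A shell wrapper mirroring the top-level checks (file names taken from the lines above) looks like:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: valid for >24h" || echo "$c: expiring soon"
    done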
	I0315 07:18:08.878842   56818 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-128870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-128870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:08.878928   56818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:08.878978   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:08.919924   56818 cri.go:89] found id: ""
	I0315 07:18:08.920004   56818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:08.931098   56818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:08.931118   56818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:08.931125   56818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:08.931178   56818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:08.942020   56818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:08.943535   56818 kubeconfig.go:125] found "default-k8s-diff-port-128870" server: "https://192.168.50.123:8444"
	I0315 07:18:08.946705   56818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:08.957298   56818 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.123
	I0315 07:18:08.957335   56818 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:08.957345   56818 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:08.957387   56818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:09.013878   56818 cri.go:89] found id: ""
	I0315 07:18:09.013957   56818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:09.032358   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:09.042788   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:09.042808   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:09.042848   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:18:09.052873   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:09.052944   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:09.063513   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:18:09.073685   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:09.073759   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:09.084114   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.094253   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:09.094309   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:09.104998   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:18:09.115196   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:09.115266   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:09.125815   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:09.137362   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:09.256236   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.053463   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.301226   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.399331   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:10.478177   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:10.478281   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:07.513017   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:07.513524   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:07.513554   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:07.513467   58029 retry.go:31] will retry after 1.359158137s: waiting for machine to come up
	I0315 07:18:08.873813   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:08.874310   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:08.874341   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:08.874261   58029 retry.go:31] will retry after 2.161372273s: waiting for machine to come up
	I0315 07:18:11.038353   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:11.038886   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:11.038920   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:11.038838   58029 retry.go:31] will retry after 2.203593556s: waiting for machine to come up
	I0315 07:18:10.979153   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.478594   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.978397   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:11.994277   56818 api_server.go:72] duration metric: took 1.516099635s to wait for apiserver process to appear ...
	I0315 07:18:11.994308   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:18:11.994332   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.438244   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.438283   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.438302   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.495600   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:18:14.495629   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:18:14.495644   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.530375   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.530415   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
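The [+]/[-] breakdown above is the apiserver's verbose healthz payload: each post-start hook reports individually, and the endpoint keeps returning 500 until every hook passes (the earlier 403s come from anonymous requests made before the RBAC bootstrap roles that normally allow unauthenticated /healthz are in place). The same view can be fetched by hand (illustrative; -k skips TLS verification):

    curl -ks 'https://192.168.50.123:8444/healthz?verbose'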
	I0315 07:18:14.994847   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:14.999891   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:14.999927   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.494442   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.503809   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:18:15.503852   56818 api_server.go:103] status: https://192.168.50.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:18:15.994709   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:18:15.999578   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:18:16.007514   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:18:16.007543   56818 api_server.go:131] duration metric: took 4.013228591s to wait for apiserver health ...
	I0315 07:18:16.007552   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:18:16.007559   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:16.009559   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:18:13.243905   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:13.244348   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:13.244376   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:13.244303   58029 retry.go:31] will retry after 2.557006754s: waiting for machine to come up
	I0315 07:18:15.804150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:15.804682   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:15.804714   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:15.804633   58029 retry.go:31] will retry after 2.99657069s: waiting for machine to come up
	I0315 07:18:16.011125   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:18:16.022725   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
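The 457-byte 1-k8s.conflist written above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log, so the simplest hedged check is to read it back on the node:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # bridge plugin config; subnet should line up with the 10.244.0.0/16 pod CIDR above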
	I0315 07:18:16.042783   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:18:16.052015   56818 system_pods.go:59] 8 kube-system pods found
	I0315 07:18:16.052050   56818 system_pods.go:61] "coredns-5dd5756b68-zqq5q" [7fd106ae-d6d7-4a76-b9d1-ed669219d5da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:18:16.052060   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7f4f4018-4c18-417b-a996-db61c940acb8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:18:16.052068   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [d9f031e7-70e6-4545-bd6e-43465410f8a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:18:16.052077   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [02dbb621-1d75-4d98-9e28-8627f0d5ad36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:18:16.052085   56818 system_pods.go:61] "kube-proxy-xbpnr" [d190f804-e6b3-4d64-83f3-5917ffb8633b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:18:16.052095   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [0359b608-1763-4401-a124-b5b8a940753c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:18:16.052105   56818 system_pods.go:61] "metrics-server-57f55c9bc5-bhbwz" [07aae575-8c93-4562-81ad-f25cad06c2fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:18:16.052113   56818 system_pods.go:61] "storage-provisioner" [3ff37b98-66c1-4178-b592-09091c6c9a02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:18:16.052121   56818 system_pods.go:74] duration metric: took 9.315767ms to wait for pod list to return data ...
	I0315 07:18:16.052133   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:18:16.055632   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:18:16.055664   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:18:16.055679   56818 node_conditions.go:105] duration metric: took 3.537286ms to run NodePressure ...
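The pod and node checks above are made through the Kubernetes API client inside minikube; roughly the same view is available with kubectl against this profile (context name assumed to match the profile name):

    kubectl --context default-k8s-diff-port-128870 -n kube-system get pods
    kubectl --context default-k8s-diff-port-128870 get nodes -o wide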
	I0315 07:18:16.055711   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:16.258790   56818 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266075   56818 kubeadm.go:733] kubelet initialised
	I0315 07:18:16.266103   56818 kubeadm.go:734] duration metric: took 7.280848ms waiting for restarted kubelet to initialise ...
	I0315 07:18:16.266113   56818 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:18:16.276258   56818 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:18.282869   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:20.283261   56818 pod_ready.go:102] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:18.802516   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:18.803109   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | unable to find current IP address of domain old-k8s-version-981420 in network mk-old-k8s-version-981420
	I0315 07:18:18.803130   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | I0315 07:18:18.803071   58029 retry.go:31] will retry after 3.946687738s: waiting for machine to come up
	I0315 07:18:24.193713   57679 start.go:364] duration metric: took 2m1.344925471s to acquireMachinesLock for "no-preload-184055"
	I0315 07:18:24.193786   57679 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:24.193808   57679 fix.go:54] fixHost starting: 
	I0315 07:18:24.194218   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:24.194260   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:24.213894   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0315 07:18:24.214328   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:24.214897   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:18:24.214928   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:24.215311   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:24.215526   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:24.215680   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:18:24.217463   57679 fix.go:112] recreateIfNeeded on no-preload-184055: state=Stopped err=<nil>
	I0315 07:18:24.217501   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	W0315 07:18:24.217683   57679 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:24.219758   57679 out.go:177] * Restarting existing kvm2 VM for "no-preload-184055" ...
	I0315 07:18:22.751056   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751524   57277 main.go:141] libmachine: (old-k8s-version-981420) Found IP for machine: 192.168.61.243
	I0315 07:18:22.751545   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserving static IP address...
	I0315 07:18:22.751562   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has current primary IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.751997   57277 main.go:141] libmachine: (old-k8s-version-981420) Reserved static IP address: 192.168.61.243
	I0315 07:18:22.752034   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.752051   57277 main.go:141] libmachine: (old-k8s-version-981420) Waiting for SSH to be available...
	I0315 07:18:22.752084   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | skip adding static IP to network mk-old-k8s-version-981420 - found existing host DHCP lease matching {name: "old-k8s-version-981420", mac: "52:54:00:dd:a4:42", ip: "192.168.61.243"}
	I0315 07:18:22.752094   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Getting to WaitForSSH function...
	I0315 07:18:22.754436   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754750   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.754779   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.754888   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH client type: external
	I0315 07:18:22.754909   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa (-rw-------)
	I0315 07:18:22.754952   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:22.754972   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | About to run SSH command:
	I0315 07:18:22.754994   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | exit 0
	I0315 07:18:22.880850   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:22.881260   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetConfigRaw
	I0315 07:18:22.881879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:22.884443   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.884840   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.884873   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.885096   57277 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/config.json ...
	I0315 07:18:22.885321   57277 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:22.885341   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:22.885583   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:22.887671   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888018   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:22.888047   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:22.888194   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:22.888391   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888554   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:22.888704   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:22.888862   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:22.889047   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:22.889058   57277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:22.997017   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:22.997057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997354   57277 buildroot.go:166] provisioning hostname "old-k8s-version-981420"
	I0315 07:18:22.997385   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:22.997578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.000364   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000762   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.000785   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.000979   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.001206   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001382   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.001524   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.001695   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.001857   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.001869   57277 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-981420 && echo "old-k8s-version-981420" | sudo tee /etc/hostname
	I0315 07:18:23.122887   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-981420
	
	I0315 07:18:23.122915   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.125645   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126007   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.126040   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.126189   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.126403   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126571   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.126750   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.126918   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.127091   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.127108   57277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-981420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-981420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-981420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:23.246529   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:23.246558   57277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:23.246598   57277 buildroot.go:174] setting up certificates
	I0315 07:18:23.246607   57277 provision.go:84] configureAuth start
	I0315 07:18:23.246624   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetMachineName
	I0315 07:18:23.246912   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:23.249803   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250192   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.250225   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.250429   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.252928   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253249   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.253281   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.253434   57277 provision.go:143] copyHostCerts
	I0315 07:18:23.253522   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:23.253535   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:23.253598   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:23.253716   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:23.253726   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:23.253756   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:23.253836   57277 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:23.253845   57277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:23.253876   57277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:23.253990   57277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-981420 san=[127.0.0.1 192.168.61.243 localhost minikube old-k8s-version-981420]
	I0315 07:18:23.489017   57277 provision.go:177] copyRemoteCerts
	I0315 07:18:23.489078   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:23.489101   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.492102   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492444   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.492492   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.492670   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.492879   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.493050   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.493193   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:23.580449   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:23.607855   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:18:23.635886   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:23.661959   57277 provision.go:87] duration metric: took 415.335331ms to configureAuth
	I0315 07:18:23.661998   57277 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:23.662205   57277 config.go:182] Loaded profile config "old-k8s-version-981420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:18:23.662292   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.665171   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665547   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.665579   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.665734   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.665932   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666098   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.666203   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.666391   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:23.666541   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:23.666557   57277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:23.949894   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:23.949918   57277 machine.go:97] duration metric: took 1.06458208s to provisionDockerMachine
	I0315 07:18:23.949929   57277 start.go:293] postStartSetup for "old-k8s-version-981420" (driver="kvm2")
	I0315 07:18:23.949938   57277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:23.949953   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:23.950291   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:23.950324   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:23.953080   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953467   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:23.953500   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:23.953692   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:23.953897   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:23.954041   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:23.954194   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.039567   57277 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:24.044317   57277 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:24.044343   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:24.044426   57277 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:24.044556   57277 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:24.044695   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:24.054664   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:24.081071   57277 start.go:296] duration metric: took 131.131184ms for postStartSetup
	I0315 07:18:24.081111   57277 fix.go:56] duration metric: took 22.227571152s for fixHost
	I0315 07:18:24.081130   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.083907   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084296   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.084338   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.084578   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.084810   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085029   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.085241   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.085443   57277 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:24.085632   57277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0315 07:18:24.085659   57277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:24.193524   57277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487104.166175222
	
	I0315 07:18:24.193551   57277 fix.go:216] guest clock: 1710487104.166175222
	I0315 07:18:24.193560   57277 fix.go:229] Guest: 2024-03-15 07:18:24.166175222 +0000 UTC Remote: 2024-03-15 07:18:24.081115155 +0000 UTC m=+236.641049984 (delta=85.060067ms)
	I0315 07:18:24.193606   57277 fix.go:200] guest clock delta is within tolerance: 85.060067ms
	I0315 07:18:24.193611   57277 start.go:83] releasing machines lock for "old-k8s-version-981420", held for 22.340106242s
	I0315 07:18:24.193637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.193901   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:24.196723   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197115   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.197143   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.197299   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197745   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.197920   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .DriverName
	I0315 07:18:24.198006   57277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:24.198057   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.198131   57277 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:24.198158   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHHostname
	I0315 07:18:24.200724   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.200910   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201122   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201150   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201266   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201390   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:24.201411   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:24.201447   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201567   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHPort
	I0315 07:18:24.201637   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201741   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHKeyPath
	I0315 07:18:24.201950   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetSSHUsername
	I0315 07:18:24.201954   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.202107   57277 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/old-k8s-version-981420/id_rsa Username:docker}
	I0315 07:18:24.319968   57277 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:24.326999   57277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:24.478001   57277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:24.484632   57277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:24.484716   57277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:24.502367   57277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:24.502390   57277 start.go:494] detecting cgroup driver to use...
	I0315 07:18:24.502446   57277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:24.526465   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:24.541718   57277 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:24.541784   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:24.556623   57277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:24.572819   57277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:24.699044   57277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:24.874560   57277 docker.go:233] disabling docker service ...
	I0315 07:18:24.874620   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:24.893656   57277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:24.909469   57277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:25.070012   57277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:25.209429   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:25.225213   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:25.245125   57277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0315 07:18:25.245192   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.257068   57277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:25.257158   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.268039   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.278672   57277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:25.290137   57277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:25.303991   57277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:25.315319   57277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:25.315390   57277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:25.330544   57277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:25.347213   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:25.492171   57277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:18:25.660635   57277 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:25.660712   57277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:25.665843   57277 start.go:562] Will wait 60s for crictl version
	I0315 07:18:25.665896   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:25.669931   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:25.709660   57277 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:25.709753   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.742974   57277 ssh_runner.go:195] Run: crio --version
	I0315 07:18:25.780936   57277 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0315 07:18:21.783088   56818 pod_ready.go:92] pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:21.783123   56818 pod_ready.go:81] duration metric: took 5.506830379s for pod "coredns-5dd5756b68-zqq5q" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:21.783134   56818 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291005   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:22.291035   56818 pod_ready.go:81] duration metric: took 507.894451ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:22.291047   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:24.298357   56818 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:25.782438   57277 main.go:141] libmachine: (old-k8s-version-981420) Calling .GetIP
	I0315 07:18:25.785790   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786181   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:a4:42", ip: ""} in network mk-old-k8s-version-981420: {Iface:virbr3 ExpiryTime:2024-03-15 08:18:13 +0000 UTC Type:0 Mac:52:54:00:dd:a4:42 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:old-k8s-version-981420 Clientid:01:52:54:00:dd:a4:42}
	I0315 07:18:25.786212   57277 main.go:141] libmachine: (old-k8s-version-981420) DBG | domain old-k8s-version-981420 has defined IP address 192.168.61.243 and MAC address 52:54:00:dd:a4:42 in network mk-old-k8s-version-981420
	I0315 07:18:25.786453   57277 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:25.791449   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:25.805988   57277 kubeadm.go:877] updating cluster {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:25.806148   57277 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 07:18:25.806193   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:25.856967   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:25.857022   57277 ssh_runner.go:195] Run: which lz4
	I0315 07:18:25.861664   57277 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:18:25.866378   57277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:18:25.866414   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0315 07:18:24.221291   57679 main.go:141] libmachine: (no-preload-184055) Calling .Start
	I0315 07:18:24.221488   57679 main.go:141] libmachine: (no-preload-184055) Ensuring networks are active...
	I0315 07:18:24.222263   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network default is active
	I0315 07:18:24.222611   57679 main.go:141] libmachine: (no-preload-184055) Ensuring network mk-no-preload-184055 is active
	I0315 07:18:24.222990   57679 main.go:141] libmachine: (no-preload-184055) Getting domain xml...
	I0315 07:18:24.223689   57679 main.go:141] libmachine: (no-preload-184055) Creating domain...
	I0315 07:18:25.500702   57679 main.go:141] libmachine: (no-preload-184055) Waiting to get IP...
	I0315 07:18:25.501474   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.501935   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.501984   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.501904   58203 retry.go:31] will retry after 283.512913ms: waiting for machine to come up
	I0315 07:18:25.787354   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:25.787744   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:25.787769   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:25.787703   58203 retry.go:31] will retry after 353.584983ms: waiting for machine to come up
	I0315 07:18:26.143444   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.143894   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.143934   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.143858   58203 retry.go:31] will retry after 478.019669ms: waiting for machine to come up
	I0315 07:18:26.623408   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:26.623873   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:26.623922   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:26.623783   58203 retry.go:31] will retry after 541.79472ms: waiting for machine to come up
	I0315 07:18:27.167796   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.167830   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.167859   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.167789   58203 retry.go:31] will retry after 708.085768ms: waiting for machine to come up
	I0315 07:18:26.307786   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.307816   56818 pod_ready.go:81] duration metric: took 4.016759726s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.307829   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315881   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.315909   56818 pod_ready.go:81] duration metric: took 8.071293ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.315922   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.327981   56818 pod_ready.go:92] pod "kube-proxy-xbpnr" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.328003   56818 pod_ready.go:81] duration metric: took 12.074444ms for pod "kube-proxy-xbpnr" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.328013   56818 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338573   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:18:26.338603   56818 pod_ready.go:81] duration metric: took 10.58282ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:26.338616   56818 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	I0315 07:18:28.347655   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:30.348020   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:27.836898   57277 crio.go:444] duration metric: took 1.975258534s to copy over tarball
	I0315 07:18:27.837063   57277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:18:30.976952   57277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.139847827s)
	I0315 07:18:30.976991   57277 crio.go:451] duration metric: took 3.140056739s to extract the tarball
	I0315 07:18:30.977001   57277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:18:31.029255   57277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:31.065273   57277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0315 07:18:31.065302   57277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:31.065378   57277 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.065413   57277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.065420   57277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.065454   57277 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.065415   57277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.065488   57277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.065492   57277 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0315 07:18:31.065697   57277 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.067024   57277 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0315 07:18:31.067095   57277 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:31.067101   57277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.067114   57277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.067018   57277 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.067023   57277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.067029   57277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.286248   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.299799   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0315 07:18:31.320342   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.321022   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.325969   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.335237   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.349483   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.354291   57277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0315 07:18:31.354337   57277 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.354385   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0315 07:18:31.452705   57277 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0315 07:18:31.452722   57277 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0315 07:18:31.452739   57277 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.452756   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452767   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.452655   57277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0315 07:18:31.452847   57277 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.452906   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.488036   57277 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0315 07:18:31.488095   57277 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.488142   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494217   57277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0315 07:18:31.494230   57277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0315 07:18:31.494264   57277 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.494263   57277 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.494291   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494312   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0315 07:18:31.494301   57277 ssh_runner.go:195] Run: which crictl
	I0315 07:18:31.494338   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0315 07:18:31.494367   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0315 07:18:31.494398   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0315 07:18:31.609505   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0315 07:18:31.609562   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0315 07:18:31.609584   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0315 07:18:31.609561   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0315 07:18:31.609622   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0315 07:18:31.617994   57277 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0315 07:18:31.618000   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0315 07:18:31.653635   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0315 07:18:31.664374   57277 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0315 07:18:32.378490   57277 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:27.877546   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:27.878023   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:27.878051   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:27.877969   58203 retry.go:31] will retry after 865.644485ms: waiting for machine to come up
	I0315 07:18:28.745108   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:28.745711   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:28.745735   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:28.745652   58203 retry.go:31] will retry after 750.503197ms: waiting for machine to come up
	I0315 07:18:29.498199   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:29.498735   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:29.498764   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:29.498688   58203 retry.go:31] will retry after 1.195704654s: waiting for machine to come up
	I0315 07:18:30.696233   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:30.696740   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:30.696773   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:30.696689   58203 retry.go:31] will retry after 1.299625978s: waiting for machine to come up
	I0315 07:18:31.997567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:31.998047   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:31.998077   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:31.998005   58203 retry.go:31] will retry after 2.151606718s: waiting for machine to come up
	I0315 07:18:33.018286   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:35.349278   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:32.524069   57277 cache_images.go:92] duration metric: took 1.458748863s to LoadCachedImages
	W0315 07:18:32.711009   57277 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0315 07:18:32.711037   57277 kubeadm.go:928] updating node { 192.168.61.243 8443 v1.20.0 crio true true} ...
	I0315 07:18:32.711168   57277 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-981420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:18:32.711246   57277 ssh_runner.go:195] Run: crio config
	I0315 07:18:32.773412   57277 cni.go:84] Creating CNI manager for ""
	I0315 07:18:32.773439   57277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:18:32.773454   57277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:18:32.773488   57277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-981420 NodeName:old-k8s-version-981420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:18:32.773654   57277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-981420"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:18:32.773727   57277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:18:32.787411   57277 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:18:32.787478   57277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:18:32.801173   57277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0315 07:18:32.825243   57277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:18:32.854038   57277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0315 07:18:32.884524   57277 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0315 07:18:32.890437   57277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:18:32.916884   57277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:33.097530   57277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:18:33.120364   57277 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420 for IP: 192.168.61.243
	I0315 07:18:33.120386   57277 certs.go:194] generating shared ca certs ...
	I0315 07:18:33.120404   57277 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.120555   57277 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:18:33.120615   57277 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:18:33.120624   57277 certs.go:256] generating profile certs ...
	I0315 07:18:33.120753   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.key
	I0315 07:18:33.120816   57277 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key.718ebbc0
	I0315 07:18:33.120867   57277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key
	I0315 07:18:33.120998   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:18:33.121029   57277 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:18:33.121036   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:18:33.121088   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:18:33.121116   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:18:33.121140   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:18:33.121188   57277 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:33.122056   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:18:33.171792   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:18:33.212115   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:18:33.242046   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:18:33.281200   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:18:33.315708   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:18:33.378513   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:18:33.435088   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:18:33.487396   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:18:33.519152   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:18:33.548232   57277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:18:33.581421   57277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:18:33.605794   57277 ssh_runner.go:195] Run: openssl version
	I0315 07:18:33.612829   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:18:33.626138   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631606   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.631674   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:18:33.638376   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:18:33.650211   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:18:33.662755   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669792   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.669859   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:18:33.676682   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:18:33.690717   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:18:33.703047   57277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708793   57277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.708878   57277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:18:33.715613   57277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:18:33.727813   57277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:18:33.733623   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:18:33.740312   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:18:33.747412   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:18:33.756417   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:18:33.763707   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:18:33.770975   57277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:18:33.778284   57277 kubeadm.go:391] StartCluster: {Name:old-k8s-version-981420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:18:33.778389   57277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:18:33.778448   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.828277   57277 cri.go:89] found id: ""
	I0315 07:18:33.828385   57277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:18:33.841183   57277 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:18:33.841207   57277 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:18:33.841213   57277 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:18:33.841268   57277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:18:33.853224   57277 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:18:33.854719   57277 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-981420" does not appear in /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:18:33.855719   57277 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-8825/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-981420" cluster setting kubeconfig missing "old-k8s-version-981420" context setting]
	I0315 07:18:33.857162   57277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:18:33.859602   57277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:18:33.874749   57277 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.243
	I0315 07:18:33.874786   57277 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:18:33.874798   57277 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:18:33.874866   57277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:18:33.918965   57277 cri.go:89] found id: ""
	I0315 07:18:33.919038   57277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:18:33.938834   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:18:33.954470   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:18:33.954492   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:18:33.954545   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:18:33.969147   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:18:33.969220   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:18:33.984777   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:18:33.997443   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:18:33.997522   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:18:34.010113   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.022551   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:18:34.022642   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:18:34.034531   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:18:34.045589   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:18:34.045668   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:18:34.057488   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:18:34.070248   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.210342   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:34.993451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.273246   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.416276   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:18:35.531172   57277 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:18:35.531349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.031414   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:36.532361   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:37.032310   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:34.151117   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:34.151599   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:34.151624   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:34.151532   58203 retry.go:31] will retry after 2.853194383s: waiting for machine to come up
	I0315 07:18:37.006506   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:37.006950   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:37.006979   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:37.006901   58203 retry.go:31] will retry after 2.326351005s: waiting for machine to come up
	I0315 07:18:37.851412   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:40.346065   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:37.531913   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.031565   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:38.531790   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.031450   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.532099   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.031753   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:40.531918   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.032346   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:41.531430   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:42.032419   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:39.334919   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:39.335355   57679 main.go:141] libmachine: (no-preload-184055) DBG | unable to find current IP address of domain no-preload-184055 in network mk-no-preload-184055
	I0315 07:18:39.335383   57679 main.go:141] libmachine: (no-preload-184055) DBG | I0315 07:18:39.335313   58203 retry.go:31] will retry after 2.973345322s: waiting for machine to come up
	I0315 07:18:42.312429   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.312914   57679 main.go:141] libmachine: (no-preload-184055) Found IP for machine: 192.168.72.106
	I0315 07:18:42.312940   57679 main.go:141] libmachine: (no-preload-184055) Reserving static IP address...
	I0315 07:18:42.312958   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has current primary IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.313289   57679 main.go:141] libmachine: (no-preload-184055) Reserved static IP address: 192.168.72.106
	I0315 07:18:42.313314   57679 main.go:141] libmachine: (no-preload-184055) Waiting for SSH to be available...
	I0315 07:18:42.313338   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.313381   57679 main.go:141] libmachine: (no-preload-184055) DBG | skip adding static IP to network mk-no-preload-184055 - found existing host DHCP lease matching {name: "no-preload-184055", mac: "52:54:00:22:f2:82", ip: "192.168.72.106"}
	I0315 07:18:42.313406   57679 main.go:141] libmachine: (no-preload-184055) DBG | Getting to WaitForSSH function...
	I0315 07:18:42.315751   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316088   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.316136   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.316247   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH client type: external
	I0315 07:18:42.316273   57679 main.go:141] libmachine: (no-preload-184055) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa (-rw-------)
	I0315 07:18:42.316305   57679 main.go:141] libmachine: (no-preload-184055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:18:42.316331   57679 main.go:141] libmachine: (no-preload-184055) DBG | About to run SSH command:
	I0315 07:18:42.316344   57679 main.go:141] libmachine: (no-preload-184055) DBG | exit 0
	I0315 07:18:42.436545   57679 main.go:141] libmachine: (no-preload-184055) DBG | SSH cmd err, output: <nil>: 
	I0315 07:18:42.436947   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetConfigRaw
	I0315 07:18:42.437579   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.440202   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440622   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.440651   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.440856   57679 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/config.json ...
	I0315 07:18:42.441089   57679 machine.go:94] provisionDockerMachine start ...
	I0315 07:18:42.441114   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:42.441359   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.443791   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444368   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.444415   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.444642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.444829   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445011   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.445236   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.445412   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.445627   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.445643   57679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:18:42.549510   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:18:42.549533   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.549792   57679 buildroot.go:166] provisioning hostname "no-preload-184055"
	I0315 07:18:42.549817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.550023   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.552998   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553437   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.553461   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.553654   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.553838   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.553958   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.554072   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.554296   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.554470   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.554482   57679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-184055 && echo "no-preload-184055" | sudo tee /etc/hostname
	I0315 07:18:42.671517   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-184055
	
	I0315 07:18:42.671548   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.674411   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.674839   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.674884   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.675000   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:42.675192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675366   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:42.675557   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:42.675729   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:42.675904   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:42.675922   57679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184055/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:18:43.725675   56654 start.go:364] duration metric: took 55.586271409s to acquireMachinesLock for "embed-certs-709708"
	I0315 07:18:43.725726   56654 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:18:43.725734   56654 fix.go:54] fixHost starting: 
	I0315 07:18:43.726121   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:18:43.726159   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:18:43.742795   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0315 07:18:43.743227   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:18:43.743723   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:18:43.743747   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:18:43.744080   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:18:43.744246   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:18:43.744402   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:18:43.745921   56654 fix.go:112] recreateIfNeeded on embed-certs-709708: state=Stopped err=<nil>
	I0315 07:18:43.745946   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	W0315 07:18:43.746098   56654 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:18:43.748171   56654 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709708" ...
	I0315 07:18:42.787149   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:18:42.787188   57679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:18:42.787226   57679 buildroot.go:174] setting up certificates
	I0315 07:18:42.787236   57679 provision.go:84] configureAuth start
	I0315 07:18:42.787249   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetMachineName
	I0315 07:18:42.787547   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:42.790147   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790479   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.790499   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.790682   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:42.792887   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793334   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:42.793368   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:42.793526   57679 provision.go:143] copyHostCerts
	I0315 07:18:42.793591   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:18:42.793605   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:18:42.793679   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:18:42.793790   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:18:42.793803   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:18:42.793832   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:18:42.793902   57679 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:18:42.793912   57679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:18:42.793941   57679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:18:42.794015   57679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.no-preload-184055 san=[127.0.0.1 192.168.72.106 localhost minikube no-preload-184055]
	I0315 07:18:43.052715   57679 provision.go:177] copyRemoteCerts
	I0315 07:18:43.052775   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:18:43.052806   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.055428   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055778   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.055807   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.055974   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.056166   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.056300   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.056421   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.135920   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:18:43.161208   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:18:43.186039   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:18:43.212452   57679 provision.go:87] duration metric: took 425.201994ms to configureAuth
	I0315 07:18:43.212496   57679 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:18:43.212719   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:18:43.212788   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.215567   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.215872   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.215909   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.216112   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.216310   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216507   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.216674   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.216835   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.217014   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.217036   57679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:18:43.491335   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:18:43.491364   57679 machine.go:97] duration metric: took 1.050260674s to provisionDockerMachine
	I0315 07:18:43.491379   57679 start.go:293] postStartSetup for "no-preload-184055" (driver="kvm2")
	I0315 07:18:43.491390   57679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:18:43.491406   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.491736   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:18:43.491783   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.494264   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494601   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.494629   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.494833   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.495010   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.495192   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.495410   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.577786   57679 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:18:43.582297   57679 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:18:43.582323   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:18:43.582385   57679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:18:43.582465   57679 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:18:43.582553   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:18:43.594809   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:18:43.619993   57679 start.go:296] duration metric: took 128.602651ms for postStartSetup
	I0315 07:18:43.620048   57679 fix.go:56] duration metric: took 19.426251693s for fixHost
	I0315 07:18:43.620080   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.622692   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623117   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.623149   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.623281   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.623488   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623642   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.623799   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.623982   57679 main.go:141] libmachine: Using SSH client type: native
	I0315 07:18:43.624164   57679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0315 07:18:43.624177   57679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:18:43.725531   57679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487123.702541727
	
	I0315 07:18:43.725554   57679 fix.go:216] guest clock: 1710487123.702541727
	I0315 07:18:43.725564   57679 fix.go:229] Guest: 2024-03-15 07:18:43.702541727 +0000 UTC Remote: 2024-03-15 07:18:43.620064146 +0000 UTC m=+140.919121145 (delta=82.477581ms)
	I0315 07:18:43.725591   57679 fix.go:200] guest clock delta is within tolerance: 82.477581ms
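Note: the guest-clock check above (fix.go) reads the VM's "date +%s.%N" output and accepts any drift under the tolerance. A rough manual reproduction over the same SSH identity and address that were logged for this host (a sketch, not a minikube interface):
  guest=$(ssh -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa docker@192.168.72.106 'date +%s.%N')
  host=$(date +%s.%N)
  awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest/host clock delta: %.6fs\n", h - g }'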
	I0315 07:18:43.725598   57679 start.go:83] releasing machines lock for "no-preload-184055", held for 19.531849255s
	I0315 07:18:43.725626   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.725905   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:43.728963   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729327   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.729350   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.729591   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730139   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730341   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:18:43.730429   57679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:18:43.730477   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.730605   57679 ssh_runner.go:195] Run: cat /version.json
	I0315 07:18:43.730636   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:18:43.733635   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.733858   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734036   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734065   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734230   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734350   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:43.734388   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:43.734415   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734563   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.734617   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:18:43.734718   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.734817   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:18:43.734953   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:18:43.735096   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:18:43.851940   57679 ssh_runner.go:195] Run: systemctl --version
	I0315 07:18:43.858278   57679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:18:44.011307   57679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:18:44.017812   57679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:18:44.017889   57679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:18:44.037061   57679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:18:44.037090   57679 start.go:494] detecting cgroup driver to use...
	I0315 07:18:44.037155   57679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:18:44.055570   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:18:44.072542   57679 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:18:44.072607   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:18:44.089296   57679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:18:44.106248   57679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:18:44.226582   57679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:18:44.401618   57679 docker.go:233] disabling docker service ...
	I0315 07:18:44.401684   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:18:44.422268   57679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:18:44.438917   57679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:18:44.577577   57679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:18:44.699441   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:18:44.713711   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:18:44.735529   57679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:18:44.735596   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.746730   57679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:18:44.746800   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.758387   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.769285   57679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:18:44.780431   57679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:18:44.793051   57679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:18:44.803849   57679 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:18:44.803900   57679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:18:44.817347   57679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:18:44.827617   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:18:44.955849   57679 ssh_runner.go:195] Run: sudo systemctl restart crio
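For reference, the cri-o preparation performed in the run above (pause image, cgroupfs driver, conmon cgroup, netfilter, restart) condenses to the following guest-side commands; this is a sketch assembled from the logged ssh_runner calls, not a supported minikube entry point:
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
  sudo modprobe br_netfilter                       # bridge-nf-call-iptables was missing before this
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
  sudo systemctl daemon-reload && sudo systemctl restart crio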
	I0315 07:18:45.110339   57679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:18:45.110412   57679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:18:45.116353   57679 start.go:562] Will wait 60s for crictl version
	I0315 07:18:45.116409   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.120283   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:18:45.165223   57679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:18:45.165315   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.195501   57679 ssh_runner.go:195] Run: crio --version
	I0315 07:18:45.227145   57679 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0315 07:18:43.749495   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Start
	I0315 07:18:43.749652   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring networks are active...
	I0315 07:18:43.750367   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network default is active
	I0315 07:18:43.750674   56654 main.go:141] libmachine: (embed-certs-709708) Ensuring network mk-embed-certs-709708 is active
	I0315 07:18:43.750996   56654 main.go:141] libmachine: (embed-certs-709708) Getting domain xml...
	I0315 07:18:43.751723   56654 main.go:141] libmachine: (embed-certs-709708) Creating domain...
	I0315 07:18:45.031399   56654 main.go:141] libmachine: (embed-certs-709708) Waiting to get IP...
	I0315 07:18:45.032440   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.033100   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.033130   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.033063   58809 retry.go:31] will retry after 257.883651ms: waiting for machine to come up
	I0315 07:18:45.293330   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.293785   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.293813   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.293753   58809 retry.go:31] will retry after 291.763801ms: waiting for machine to come up
	I0315 07:18:45.587496   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:45.588018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:45.588053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:45.587966   58809 retry.go:31] will retry after 483.510292ms: waiting for machine to come up
	I0315 07:18:42.848547   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:44.848941   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:42.531616   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.031700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:43.531743   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.031720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:44.531569   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.032116   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:45.531415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.031885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:46.531806   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:47.031892   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
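The block of repeated pgrep calls above is a half-second readiness poll for the restarted kube-apiserver process; functionally it is roughly equivalent to:
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5   # retried until the apiserver process appears (or the caller's timeout fires)
  done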
	I0315 07:18:45.228809   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetIP
	I0315 07:18:45.231820   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232306   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:18:45.232346   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:18:45.232615   57679 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0315 07:18:45.237210   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
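The one-liner above is minikube's upsert pattern for /etc/hosts: drop any existing line for the name, append the desired mapping, then copy the temp file back with sudo. Spelled out with the logged values (printf used here for the tab, otherwise equivalent):
  { grep -v $'\thost.minikube.internal$' /etc/hosts
    printf '192.168.72.1\thost.minikube.internal\n'
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$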
	I0315 07:18:45.253048   57679 kubeadm.go:877] updating cluster {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:18:45.253169   57679 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 07:18:45.253211   57679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:18:45.290589   57679 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0315 07:18:45.290615   57679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0315 07:18:45.290686   57679 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.290706   57679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.290727   57679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.290780   57679 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.290709   57679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.290829   57679 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0315 07:18:45.290852   57679 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.290688   57679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.292274   57679 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:45.292279   57679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.292285   57679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.292293   57679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.292267   57679 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0315 07:18:45.292343   57679 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.292266   57679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.518045   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0315 07:18:45.518632   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.519649   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.520895   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.530903   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.534319   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.557827   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.780756   57679 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0315 07:18:45.780786   57679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0315 07:18:45.780802   57679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.780824   57679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.780839   57679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0315 07:18:45.780854   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780869   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780873   57679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.780908   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.780947   57679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0315 07:18:45.780977   57679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.780979   57679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0315 07:18:45.781013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781015   57679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.781045   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.781050   57679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0315 07:18:45.781075   57679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.781117   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:45.794313   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0315 07:18:45.794384   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0315 07:18:45.794874   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0315 07:18:45.794930   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0315 07:18:45.794998   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0315 07:18:45.795015   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0315 07:18:45.920709   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0315 07:18:45.920761   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.920818   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.920861   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:45.950555   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950668   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:45.950674   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.950770   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:45.953567   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0315 07:18:45.953680   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:45.954804   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0315 07:18:45.954820   57679 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.954834   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0315 07:18:45.954860   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0315 07:18:45.955149   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.955229   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:45.962799   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0315 07:18:45.962848   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0315 07:18:45.967323   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0315 07:18:46.131039   57679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:46.072738   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.073159   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.073194   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.073116   58809 retry.go:31] will retry after 584.886361ms: waiting for machine to come up
	I0315 07:18:46.659981   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:46.660476   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:46.660508   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:46.660422   58809 retry.go:31] will retry after 468.591357ms: waiting for machine to come up
	I0315 07:18:47.131146   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.131602   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.131632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.131545   58809 retry.go:31] will retry after 684.110385ms: waiting for machine to come up
	I0315 07:18:47.817532   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:47.818053   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:47.818077   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:47.818006   58809 retry.go:31] will retry after 1.130078362s: waiting for machine to come up
	I0315 07:18:48.950134   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:48.950609   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:48.950636   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:48.950568   58809 retry.go:31] will retry after 1.472649521s: waiting for machine to come up
	I0315 07:18:50.424362   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:50.424769   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:50.424794   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:50.424727   58809 retry.go:31] will retry after 1.764661467s: waiting for machine to come up
	I0315 07:18:46.849608   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:48.849792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:47.531684   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.031420   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:48.532078   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.031488   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:49.531894   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.031876   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.531489   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.032315   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:51.532025   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.032349   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:50.042422   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (4.087167844s)
	I0315 07:18:50.042467   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0315 07:18:50.042517   57679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.911440029s)
	I0315 07:18:50.042558   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.087678912s)
	I0315 07:18:50.042581   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0315 07:18:50.042593   57679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0315 07:18:50.042642   57679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:50.042697   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:18:50.042605   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.042792   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0315 07:18:50.047136   57679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:18:52.191251   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:52.191800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:52.191830   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:52.191752   58809 retry.go:31] will retry after 2.055252939s: waiting for machine to come up
	I0315 07:18:54.248485   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:54.248992   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:54.249033   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:54.248925   58809 retry.go:31] will retry after 2.088340673s: waiting for machine to come up
	I0315 07:18:51.354874   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:53.847178   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:52.531679   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.032198   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:53.531885   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.031838   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:54.531480   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.031478   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:55.532233   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.032219   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:56.531515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:57.031774   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.820533519s)
	I0315 07:18:52.863400   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0315 07:18:52.863425   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863473   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0315 07:18:52.863359   57679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.816192568s)
	I0315 07:18:52.863575   57679 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0315 07:18:52.863689   57679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:18:54.932033   57679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.068320136s)
	I0315 07:18:54.932078   57679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0315 07:18:54.932035   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.068531632s)
	I0315 07:18:54.932094   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0315 07:18:54.932115   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:54.932179   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0315 07:18:56.403605   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.4713961s)
	I0315 07:18:56.403640   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0315 07:18:56.403669   57679 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.403723   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0315 07:18:56.339118   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:56.339678   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:56.339713   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:56.339608   58809 retry.go:31] will retry after 2.53383617s: waiting for machine to come up
	I0315 07:18:58.875345   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:18:58.875748   56654 main.go:141] libmachine: (embed-certs-709708) DBG | unable to find current IP address of domain embed-certs-709708 in network mk-embed-certs-709708
	I0315 07:18:58.875778   56654 main.go:141] libmachine: (embed-certs-709708) DBG | I0315 07:18:58.875687   58809 retry.go:31] will retry after 2.766026598s: waiting for machine to come up
	I0315 07:18:55.848559   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:58.345399   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:00.346417   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:18:57.532018   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.031719   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.531409   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.031439   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:59.532160   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.031544   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:00.532446   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.031969   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:01.531498   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.032043   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:18:58.381660   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.977913941s)
	I0315 07:18:58.381692   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0315 07:18:58.381718   57679 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:18:58.381768   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0315 07:19:00.755422   57679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.373623531s)
	I0315 07:19:00.755458   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0315 07:19:00.755504   57679 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:00.755568   57679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0315 07:19:01.715162   57679 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18213-8825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0315 07:19:01.715212   57679 cache_images.go:123] Successfully loaded all cached images
	I0315 07:19:01.715220   57679 cache_images.go:92] duration metric: took 16.424591777s to LoadCachedImages
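Because no preload tarball exists for v1.29.0-rc.2, each image above is transferred from the local cache and loaded into the guest's container store with podman. Replaying one image by hand, using the exact paths that were logged, would look roughly like:
  # load the cached tarball only if the image is not already present
  sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.10-0 >/dev/null 2>&1 \
    || sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
  sudo crictl images | grep etcd    # confirm the runtime now sees it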
	I0315 07:19:01.715233   57679 kubeadm.go:928] updating node { 192.168.72.106 8443 v1.29.0-rc.2 crio true true} ...
	I0315 07:19:01.715336   57679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:01.715399   57679 ssh_runner.go:195] Run: crio config
	I0315 07:19:01.763947   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:01.763976   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:01.763991   57679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:01.764018   57679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184055 NodeName:no-preload-184055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:01.764194   57679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-184055"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
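The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new; whether the restart needs a full reconfiguration is decided later in this log by diffing it against the copy already on disk. The equivalent manual check is simply:
  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
    && echo "no reconfiguration needed"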
	I0315 07:19:01.764253   57679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0315 07:19:01.776000   57679 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:01.776058   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:01.788347   57679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:19:01.809031   57679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0315 07:19:01.829246   57679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0315 07:19:01.847997   57679 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:01.852108   57679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:01.866595   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:01.995786   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:02.015269   57679 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055 for IP: 192.168.72.106
	I0315 07:19:02.015290   57679 certs.go:194] generating shared ca certs ...
	I0315 07:19:02.015304   57679 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:02.015477   57679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:02.015532   57679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:02.015545   57679 certs.go:256] generating profile certs ...
	I0315 07:19:02.015661   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/client.key
	I0315 07:19:02.015743   57679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key.516a8979
	I0315 07:19:02.015809   57679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key
	I0315 07:19:02.015959   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:02.015996   57679 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:02.016007   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:02.016044   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:02.016073   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:02.016107   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:02.016159   57679 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.016801   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:02.066927   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:02.111125   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:02.143461   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:02.183373   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 07:19:02.217641   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:02.247391   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:02.272717   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/no-preload-184055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:19:02.298446   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:02.324638   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:02.353518   57679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:02.378034   57679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:02.395565   57679 ssh_runner.go:195] Run: openssl version
	I0315 07:19:02.403372   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:02.417728   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422588   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.422643   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:02.428559   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:02.440545   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:02.453137   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459691   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.459757   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:02.465928   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:02.478943   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:02.491089   57679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496135   57679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.496186   57679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:02.503214   57679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:02.515676   57679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:02.521113   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:02.527599   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:02.534311   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:02.541847   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:02.549359   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:02.556354   57679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
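The certificate handling above does two things: it installs each CA under /etc/ssl/certs using OpenSSL's subject-hash naming (<hash>.0), and it verifies each control-plane certificate is still valid for at least 24 hours (-checkend 86400). A condensed sketch of both checks, using the paths from this run:
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "apiserver-kubelet-client.crt valid for >= 24h"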
	I0315 07:19:02.562565   57679 kubeadm.go:391] StartCluster: {Name:no-preload-184055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-184055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:02.562640   57679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:02.562681   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.603902   57679 cri.go:89] found id: ""
	I0315 07:19:02.603960   57679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:02.615255   57679 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:02.615278   57679 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:02.615285   57679 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:02.615352   57679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:02.625794   57679 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:02.626903   57679 kubeconfig.go:125] found "no-preload-184055" server: "https://192.168.72.106:8443"
	I0315 07:19:02.629255   57679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:02.641085   57679 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.106
	I0315 07:19:02.641119   57679 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:02.641131   57679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:02.641184   57679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:02.679902   57679 cri.go:89] found id: ""
	I0315 07:19:02.679972   57679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:02.698555   57679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:02.710436   57679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:02.710457   57679 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:02.710510   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:02.721210   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:02.721287   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:02.732445   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:02.742547   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:02.742609   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:01.644448   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644903   56654 main.go:141] libmachine: (embed-certs-709708) Found IP for machine: 192.168.39.80
	I0315 07:19:01.644933   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has current primary IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.644941   56654 main.go:141] libmachine: (embed-certs-709708) Reserving static IP address...
	I0315 07:19:01.645295   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.645341   56654 main.go:141] libmachine: (embed-certs-709708) DBG | skip adding static IP to network mk-embed-certs-709708 - found existing host DHCP lease matching {name: "embed-certs-709708", mac: "52:54:00:46:25:ab", ip: "192.168.39.80"}
	I0315 07:19:01.645353   56654 main.go:141] libmachine: (embed-certs-709708) Reserved static IP address: 192.168.39.80
	I0315 07:19:01.645368   56654 main.go:141] libmachine: (embed-certs-709708) Waiting for SSH to be available...
	I0315 07:19:01.645381   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Getting to WaitForSSH function...
	I0315 07:19:01.647351   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647603   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.647632   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.647729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH client type: external
	I0315 07:19:01.647757   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Using SSH private key: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa (-rw-------)
	I0315 07:19:01.647800   56654 main.go:141] libmachine: (embed-certs-709708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 07:19:01.647815   56654 main.go:141] libmachine: (embed-certs-709708) DBG | About to run SSH command:
	I0315 07:19:01.647826   56654 main.go:141] libmachine: (embed-certs-709708) DBG | exit 0
	I0315 07:19:01.773018   56654 main.go:141] libmachine: (embed-certs-709708) DBG | SSH cmd err, output: <nil>: 
	I0315 07:19:01.773396   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetConfigRaw
	I0315 07:19:01.774063   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:01.776972   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777394   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.777420   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.777722   56654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/config.json ...
	I0315 07:19:01.777993   56654 machine.go:94] provisionDockerMachine start ...
	I0315 07:19:01.778019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:01.778240   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.780561   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.780899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.780929   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.781048   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.781269   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781424   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.781567   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.781709   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.781926   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.781949   56654 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:19:01.892885   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0315 07:19:01.892917   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893185   56654 buildroot.go:166] provisioning hostname "embed-certs-709708"
	I0315 07:19:01.893216   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:01.893419   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:01.896253   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896703   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:01.896731   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:01.896903   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:01.897096   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897252   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:01.897404   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:01.897572   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:01.897805   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:01.897824   56654 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709708 && echo "embed-certs-709708" | sudo tee /etc/hostname
	I0315 07:19:02.025278   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709708
	
	I0315 07:19:02.025326   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.028366   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.028884   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.028909   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.029101   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.029305   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029494   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.029627   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.029777   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.030554   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.030579   56654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709708/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:19:02.156712   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:19:02.156746   56654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18213-8825/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-8825/.minikube}
	I0315 07:19:02.156775   56654 buildroot.go:174] setting up certificates
	I0315 07:19:02.156789   56654 provision.go:84] configureAuth start
	I0315 07:19:02.156801   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetMachineName
	I0315 07:19:02.157127   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:02.160085   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160505   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.160537   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.160708   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.163257   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163629   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.163657   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.163889   56654 provision.go:143] copyHostCerts
	I0315 07:19:02.163968   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem, removing ...
	I0315 07:19:02.163980   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem
	I0315 07:19:02.164056   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/cert.pem (1123 bytes)
	I0315 07:19:02.164175   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem, removing ...
	I0315 07:19:02.164187   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem
	I0315 07:19:02.164223   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/key.pem (1679 bytes)
	I0315 07:19:02.164300   56654 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem, removing ...
	I0315 07:19:02.164309   56654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem
	I0315 07:19:02.164336   56654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-8825/.minikube/ca.pem (1078 bytes)
	I0315 07:19:02.164406   56654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709708 san=[127.0.0.1 192.168.39.80 embed-certs-709708 localhost minikube]
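The "generating server cert" step above signs a TLS serving certificate with the cluster CA and embeds the SANs listed in the log (127.0.0.1, 192.168.39.80, embed-certs-709708, localhost, minikube). A rough, standalone sketch of that kind of CA-signed certificate generation with Go's crypto/x509 follows; the file names, PKCS#1 RSA key format, and lifetime are illustrative assumptions, not values taken from minikube's source.

// Illustrative sketch: sign a server certificate with an existing CA, embedding
// IP and DNS SANs. Paths, key format, and lifetime are placeholder assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecodePEM reads a file and returns the DER bytes of its first PEM block.
func mustDecodePEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecodePEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	// Assumes a PKCS#1 "RSA PRIVATE KEY" CA key; other formats need other parsers.
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecodePEM("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		// Organization mirrors the org= value shown in the log line above.
		Subject:     pkix.Name{Organization: []string{"jenkins.embed-certs-709708"}},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log: loopback, the VM IP, the hostname, and aliases.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.80")},
		DNSNames:    []string{"embed-certs-709708", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	// A real flow would also persist the matching server-key.pem.
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}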
	I0315 07:19:02.316612   56654 provision.go:177] copyRemoteCerts
	I0315 07:19:02.316682   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:19:02.316714   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.319348   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319698   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.319729   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.319905   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.320124   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.320349   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.320522   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.411010   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:19:02.437482   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0315 07:19:02.467213   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:19:02.496830   56654 provision.go:87] duration metric: took 340.029986ms to configureAuth
	I0315 07:19:02.496859   56654 buildroot.go:189] setting minikube options for container-runtime
	I0315 07:19:02.497087   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:19:02.497183   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.499512   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.499856   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.499890   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.500013   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.500239   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500426   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.500590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.500747   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.500915   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.500930   56654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 07:19:02.789073   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 07:19:02.789096   56654 machine.go:97] duration metric: took 1.011084447s to provisionDockerMachine
	I0315 07:19:02.789109   56654 start.go:293] postStartSetup for "embed-certs-709708" (driver="kvm2")
	I0315 07:19:02.789125   56654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:19:02.789149   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:02.789460   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:19:02.789491   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.792272   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792606   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.792630   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.792803   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.792994   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.793133   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.793312   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:02.884917   56654 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:19:02.889393   56654 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 07:19:02.889415   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/addons for local assets ...
	I0315 07:19:02.889484   56654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-8825/.minikube/files for local assets ...
	I0315 07:19:02.889556   56654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem -> 160752.pem in /etc/ssl/certs
	I0315 07:19:02.889647   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:19:02.899340   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:02.931264   56654 start.go:296] duration metric: took 142.140002ms for postStartSetup
	I0315 07:19:02.931306   56654 fix.go:56] duration metric: took 19.205570424s for fixHost
	I0315 07:19:02.931330   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:02.933874   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934239   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:02.934275   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:02.934397   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:02.934590   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934759   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:02.934909   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:02.935069   56654 main.go:141] libmachine: Using SSH client type: native
	I0315 07:19:02.935237   56654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0315 07:19:02.935247   56654 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 07:19:03.049824   56654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710487143.028884522
	
	I0315 07:19:03.049866   56654 fix.go:216] guest clock: 1710487143.028884522
	I0315 07:19:03.049878   56654 fix.go:229] Guest: 2024-03-15 07:19:03.028884522 +0000 UTC Remote: 2024-03-15 07:19:02.931311827 +0000 UTC m=+357.383146796 (delta=97.572695ms)
	I0315 07:19:03.049914   56654 fix.go:200] guest clock delta is within tolerance: 97.572695ms
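The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the roughly 97ms delta as within tolerance. A minimal sketch of that comparison; the one-second tolerance and the sample values (copied from the log) are illustrative, not minikube's actual threshold:

// Illustrative sketch of a guest-vs-host clock drift check.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns guest minus host time.
func guestClockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
	s := strings.TrimSpace(guestDateOutput) // e.g. "1710487143.028884522"
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance for this sketch
	// Sample values taken from the log above (guest clock vs. host timestamp).
	delta, err := guestClockDelta("1710487143.028884522", time.Unix(1710487142, 931311827))
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}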
	I0315 07:19:03.049924   56654 start.go:83] releasing machines lock for "embed-certs-709708", held for 19.324213931s
	I0315 07:19:03.049953   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.050234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:03.053446   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.053863   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.053894   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.054019   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054512   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054716   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:19:03.054809   56654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:19:03.054863   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.054977   56654 ssh_runner.go:195] Run: cat /version.json
	I0315 07:19:03.055002   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:19:03.057529   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057725   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.057968   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.057990   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058136   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:03.058231   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:03.058309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058382   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:19:03.058453   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.058558   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:19:03.058600   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.059301   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:19:03.059489   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:19:03.133916   56654 ssh_runner.go:195] Run: systemctl --version
	I0315 07:19:03.172598   56654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 07:19:03.315582   56654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 07:19:03.322748   56654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 07:19:03.322817   56654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:19:03.344389   56654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 07:19:03.344414   56654 start.go:494] detecting cgroup driver to use...
	I0315 07:19:03.344497   56654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 07:19:03.361916   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 07:19:03.380117   56654 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:19:03.380188   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:19:03.398778   56654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:19:03.413741   56654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:19:03.555381   56654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:19:03.723615   56654 docker.go:233] disabling docker service ...
	I0315 07:19:03.723691   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:19:03.739459   56654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:19:03.753608   56654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:19:03.888720   56654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:19:04.010394   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:19:04.025664   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:19:04.050174   56654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 07:19:04.050318   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.063660   56654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 07:19:04.063722   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.076671   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.089513   56654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 07:19:04.106303   56654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:19:04.122411   56654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:19:04.134737   56654 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 07:19:04.134807   56654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 07:19:04.150141   56654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
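The sysctl failure above ("cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables") is expected when the br_netfilter module is not loaded yet, so the next commands load the module and enable IPv4 forwarding. A small sketch of that fallback, assuming passwordless sudo on the target host (again a sketch, not minikube's code):

// Illustrative sketch: ensure bridge netfilter and IPv4 forwarding are available.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		// The sysctl only appears once the br_netfilter module is loaded.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter failed: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding so pod traffic can be routed off the node.
	if out, err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "enabling ip_forward failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward are ready")
}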
	I0315 07:19:04.162112   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:04.299414   56654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 07:19:04.465059   56654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 07:19:04.465173   56654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 07:19:04.470686   56654 start.go:562] Will wait 60s for crictl version
	I0315 07:19:04.470752   56654 ssh_runner.go:195] Run: which crictl
	I0315 07:19:04.476514   56654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:19:04.526512   56654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 07:19:04.526580   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.562374   56654 ssh_runner.go:195] Run: crio --version
	I0315 07:19:04.600662   56654 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 07:19:04.602093   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetIP
	I0315 07:19:04.604915   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605315   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:19:04.605342   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:19:04.605611   56654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 07:19:04.610239   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:04.624434   56654 kubeadm.go:877] updating cluster {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:19:04.624623   56654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 07:19:04.624694   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:04.662547   56654 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 07:19:04.662608   56654 ssh_runner.go:195] Run: which lz4
	I0315 07:19:04.666936   56654 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 07:19:04.671280   56654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 07:19:04.671304   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 07:19:02.846643   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:05.355930   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:02.532265   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.032220   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:03.532187   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.031557   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:04.531581   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.031391   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.532335   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.031540   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:07.032431   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:02.753675   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.767447   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:02.767534   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:02.777970   57679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:02.788924   57679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:02.789016   57679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:02.801339   57679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:02.812430   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:02.920559   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.085253   57679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.164653996s)
	I0315 07:19:04.085292   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.318428   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.397758   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:04.499346   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:04.499446   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.000270   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:05.500541   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.000355   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:06.016461   57679 api_server.go:72] duration metric: took 1.5171127s to wait for apiserver process to appear ...
	I0315 07:19:06.016508   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:06.016531   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
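Once the kubeadm init phases complete, the log waits for the kube-apiserver process and then polls https://192.168.72.106:8443/healthz until it reports healthy. A minimal sketch of such a healthz poll is below; the timeout, poll interval, and the decision to skip TLS verification are assumptions made for illustration (production code would trust the cluster CA instead):

// Illustrative sketch: poll the apiserver /healthz endpoint until it returns "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by the cluster CA; this throwaway probe
		// skips verification, which real code should not do.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}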
	I0315 07:19:06.535097   56654 crio.go:444] duration metric: took 1.868182382s to copy over tarball
	I0315 07:19:06.535191   56654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 07:19:09.200607   56654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.665377153s)
	I0315 07:19:09.200642   56654 crio.go:451] duration metric: took 2.665517622s to extract the tarball
	I0315 07:19:09.200652   56654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 07:19:09.252819   56654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:19:09.304448   56654 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 07:19:09.304486   56654 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:19:09.304497   56654 kubeadm.go:928] updating node { 192.168.39.80 8443 v1.28.4 crio true true} ...
	I0315 07:19:09.304636   56654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:19:09.304720   56654 ssh_runner.go:195] Run: crio config
	I0315 07:19:09.370439   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:09.370467   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:09.370479   56654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:19:09.370507   56654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709708 NodeName:embed-certs-709708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:19:09.370686   56654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:19:09.370764   56654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:19:09.385315   56654 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:19:09.385375   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:19:09.397956   56654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:19:09.422758   56654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:19:09.448155   56654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
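The 2159-byte kubeadm.yaml.new written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as dumped earlier in the log). A small sketch that splits and sanity-checks such a file, using gopkg.in/yaml.v3 as an assumed helper library; minikube itself may generate and parse the config differently:

// Illustrative sketch: list the apiVersion/kind of each document in a
// multi-document kubeadm config file.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			fmt.Fprintln(os.Stderr, "parse error:", err)
			os.Exit(1)
		}
		// Each document declares its own apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}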
	I0315 07:19:09.470929   56654 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0315 07:19:09.475228   56654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:19:09.489855   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:09.631284   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:09.653665   56654 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708 for IP: 192.168.39.80
	I0315 07:19:09.653690   56654 certs.go:194] generating shared ca certs ...
	I0315 07:19:09.653709   56654 certs.go:226] acquiring lock for ca certs: {Name:mk21107528582bd739e76442bcf4baaf7000ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:09.653899   56654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key
	I0315 07:19:09.653962   56654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key
	I0315 07:19:09.653979   56654 certs.go:256] generating profile certs ...
	I0315 07:19:09.654078   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/client.key
	I0315 07:19:09.763493   56654 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key.f0929e29
	I0315 07:19:09.763624   56654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key
	I0315 07:19:09.763771   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem (1338 bytes)
	W0315 07:19:09.763811   56654 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075_empty.pem, impossibly tiny 0 bytes
	I0315 07:19:09.763827   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca-key.pem (1679 bytes)
	I0315 07:19:09.763864   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:19:09.763897   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:19:09.763928   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/certs/key.pem (1679 bytes)
	I0315 07:19:09.763982   56654 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem (1708 bytes)
	I0315 07:19:09.764776   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:19:09.806388   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 07:19:09.843162   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:19:09.870547   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:19:09.897155   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:19:09.924274   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:19:09.949996   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:19:09.978672   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/embed-certs-709708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:19:10.011387   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/ssl/certs/160752.pem --> /usr/share/ca-certificates/160752.pem (1708 bytes)
	I0315 07:19:10.041423   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:19:10.070040   56654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-8825/.minikube/certs/16075.pem --> /usr/share/ca-certificates/16075.pem (1338 bytes)
	I0315 07:19:10.096972   56654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:19:10.115922   56654 ssh_runner.go:195] Run: openssl version
	I0315 07:19:10.122615   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160752.pem && ln -fs /usr/share/ca-certificates/160752.pem /etc/ssl/certs/160752.pem"
	I0315 07:19:10.136857   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142233   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 06:06 /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.142297   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160752.pem
	I0315 07:19:10.148674   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160752.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:19:10.161274   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:19:10.174209   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179218   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 05:58 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.179282   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:19:10.185684   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:19:10.197793   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16075.pem && ln -fs /usr/share/ca-certificates/16075.pem /etc/ssl/certs/16075.pem"
	I0315 07:19:10.211668   56654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216821   56654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 06:06 /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.216872   56654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16075.pem
	I0315 07:19:10.223072   56654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16075.pem /etc/ssl/certs/51391683.0"
	I0315 07:19:10.242373   56654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:19:10.248851   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:19:10.257745   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:19:10.265957   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:19:10.273686   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:19:10.280191   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:19:10.286658   56654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
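
Each of the "openssl x509 -noout -in <cert> -checkend 86400" runs above asks one question: does the certificate expire within the next 86400 seconds (24 hours)? Below is a small Go sketch of the same check using crypto/x509, shown only to make the -checkend semantics explicit; the path is one of the certs probed above and the helper name is ours.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, which is the question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust as needed.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
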
	I0315 07:19:10.294418   56654 kubeadm.go:391] StartCluster: {Name:embed-certs-709708 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-709708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:19:10.294535   56654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 07:19:10.294615   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.334788   56654 cri.go:89] found id: ""
	I0315 07:19:10.334853   56654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:19:10.347183   56654 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:19:10.347204   56654 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:19:10.347213   56654 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:19:10.347267   56654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:19:10.358831   56654 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:19:10.360243   56654 kubeconfig.go:125] found "embed-certs-709708" server: "https://192.168.39.80:8443"
	I0315 07:19:10.363052   56654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:19:10.374793   56654 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.80
	I0315 07:19:10.374836   56654 kubeadm.go:1154] stopping kube-system containers ...
	I0315 07:19:10.374850   56654 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0315 07:19:10.374901   56654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:19:10.425329   56654 cri.go:89] found id: ""
	I0315 07:19:10.425422   56654 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 07:19:10.456899   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:19:10.471446   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:19:10.471472   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:19:10.471524   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:19:10.482594   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:19:10.482665   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:19:10.494222   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:19:10.507532   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:19:10.507603   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:19:10.521573   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.532936   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:19:10.532987   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:19:10.545424   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:19:10.555920   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:19:10.555997   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:19:10.566512   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:19:10.580296   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:07.850974   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:10.345768   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:07.531735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.032448   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:08.532044   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.031392   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.532117   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.032364   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:10.532425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.032181   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:11.532189   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.031783   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:09.220113   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.220149   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.220168   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.257960   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:09.257989   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:09.517513   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:09.522510   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:09.522543   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.016844   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.021836   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.021870   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:10.517500   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:10.522377   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:10.522408   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.016946   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:11.021447   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:11.021474   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:11.517004   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.070536   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.070591   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.070620   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.076358   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:12.076388   57679 api_server.go:103] status: https://192.168.72.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:12.517623   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:19:12.524259   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:19:12.532170   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:19:12.532204   57679 api_server.go:131] duration metric: took 6.515687483s to wait for apiserver health ...
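
The 403 responses above come from the unauthenticated probe (the apiserver rejects system:anonymous until its bootstrap RBAC is in place), and the 500s persist only while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing; the wait ends once /healthz returns 200 ("ok"). A minimal sketch of such a poll loop follows, assuming an insecure TLS client because the probe, like the one in the log, presents no client certificate; the URL and timeout are taken from the log for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns
// 200 or the deadline passes. TLS verification is skipped because this
// probe, like the one in the log, authenticates with nothing.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reports "ok"
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.106:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
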
	I0315 07:19:12.532216   57679 cni.go:84] Creating CNI manager for ""
	I0315 07:19:12.532225   57679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:12.534306   57679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:19:12.535843   57679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:12.551438   57679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:12.572988   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:12.585732   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:12.585765   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:12.585771   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:12.585785   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:12.585793   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:12.585801   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:12.585812   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:12.585823   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:12.585830   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:12.585846   57679 system_pods.go:74] duration metric: took 12.835431ms to wait for pod list to return data ...
	I0315 07:19:12.585855   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:12.590205   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:12.590231   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:12.590240   57679 node_conditions.go:105] duration metric: took 4.37876ms to run NodePressure ...
	I0315 07:19:12.590256   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.882838   57679 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887944   57679 kubeadm.go:733] kubelet initialised
	I0315 07:19:12.887965   57679 kubeadm.go:734] duration metric: took 5.101964ms waiting for restarted kubelet to initialise ...
	I0315 07:19:12.887972   57679 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:12.900170   57679 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.906934   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906958   57679 pod_ready.go:81] duration metric: took 6.75798ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.906968   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "coredns-76f75df574-tc5zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.906977   57679 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.913849   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913889   57679 pod_ready.go:81] duration metric: took 6.892698ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.913902   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "etcd-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.913911   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.919924   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919947   57679 pod_ready.go:81] duration metric: took 6.029248ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.919957   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-apiserver-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.919967   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:12.976208   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976235   57679 pod_ready.go:81] duration metric: took 56.25911ms for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:12.976247   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:12.976253   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.376616   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376646   57679 pod_ready.go:81] duration metric: took 400.379371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.376657   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-proxy-977jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.376666   57679 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:13.776577   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776600   57679 pod_ready.go:81] duration metric: took 399.92709ms for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:13.776610   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "kube-scheduler-no-preload-184055" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:13.776616   57679 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:14.177609   57679 pod_ready.go:97] node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177639   57679 pod_ready.go:81] duration metric: took 401.014243ms for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:19:14.177649   57679 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-184055" hosting pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:14.177658   57679 pod_ready.go:38] duration metric: took 1.289677234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
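
The wait loop above checks each system-critical pod for a Ready condition and, because the node itself still reports Ready=False, records every pod as skipped rather than failing outright. The following is a minimal client-go sketch of the underlying per-pod check, assuming the kubeconfig path written earlier in the log; the pod name and the 4m0s budget mirror the entries above, but the program itself is illustrative rather than minikube's own wait code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the condition the log waits for: the pod's Ready
// condition must be True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log; the pod is one of the
	// system-critical pods listed above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18213-8825/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-184055", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
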
	I0315 07:19:14.177680   57679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:19:14.190868   57679 ops.go:34] apiserver oom_adj: -16
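
An oom_adj of -16 means the kubelet started the apiserver with protection from the OOM killer. Below is a small sketch of the same inspection the log performs with "cat /proc/$(pgrep kube-apiserver)/oom_adj", shelling out to pgrep (here with exact-match flags, a slight variant of the log's invocation).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, analogous to the log's pgrep call.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))

	// Negative values (here -16) shield the process from the OOM killer;
	// /proc/<pid>/oom_score_adj is the non-deprecated equivalent file.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, adj)
}
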
	I0315 07:19:14.190895   57679 kubeadm.go:591] duration metric: took 11.575603409s to restartPrimaryControlPlane
	I0315 07:19:14.190907   57679 kubeadm.go:393] duration metric: took 11.628345942s to StartCluster
	I0315 07:19:14.190926   57679 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.191004   57679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:19:14.193808   57679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:19:14.194099   57679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:19:14.195837   57679 out.go:177] * Verifying Kubernetes components...
	I0315 07:19:14.194173   57679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:19:14.194370   57679 config.go:182] Loaded profile config "no-preload-184055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0315 07:19:14.197355   57679 addons.go:69] Setting default-storageclass=true in profile "no-preload-184055"
	I0315 07:19:14.197366   57679 addons.go:69] Setting metrics-server=true in profile "no-preload-184055"
	I0315 07:19:14.197371   57679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:19:14.197391   57679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184055"
	I0315 07:19:14.197399   57679 addons.go:234] Setting addon metrics-server=true in "no-preload-184055"
	W0315 07:19:14.197428   57679 addons.go:243] addon metrics-server should already be in state true
	I0315 07:19:14.197463   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197357   57679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-184055"
	I0315 07:19:14.197556   57679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-184055"
	W0315 07:19:14.197590   57679 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:19:14.197621   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.197796   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197825   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197844   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.197866   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.197992   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.198018   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.215885   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0315 07:19:14.216315   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.216835   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.216856   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.216979   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0315 07:19:14.217210   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.217401   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.217832   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.218337   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.218356   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.218835   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.219509   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.219558   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.221590   57679 addons.go:234] Setting addon default-storageclass=true in "no-preload-184055"
	W0315 07:19:14.221610   57679 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:19:14.221636   57679 host.go:66] Checking if "no-preload-184055" exists ...
	I0315 07:19:14.221988   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.222020   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.236072   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0315 07:19:14.236839   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.237394   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.237412   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.237738   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.238329   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.238366   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.240658   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0315 07:19:14.241104   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.241236   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0315 07:19:14.241983   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242011   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242132   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.242513   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.242529   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.242778   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.242874   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.243444   57679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:19:14.243491   57679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:19:14.244209   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.246070   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.248364   57679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:19:14.249720   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:19:14.249739   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:19:14.249753   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.252662   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.252782   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.252816   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.253074   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.253229   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.253350   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.253501   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.259989   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0315 07:19:14.260548   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.261729   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.261747   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.262095   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.262264   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.263730   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.265877   57679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:19:10.799308   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.631771   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.889695   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:11.952265   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:12.017835   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:19:12.017936   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:12.518851   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.018398   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.062369   56654 api_server.go:72] duration metric: took 1.044536539s to wait for apiserver process to appear ...
	I0315 07:19:13.062396   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:19:13.062418   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:13.062974   56654 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0315 07:19:13.563534   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:12.346953   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.847600   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:14.264201   57679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0315 07:19:14.267306   57679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.267326   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:19:14.267343   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.269936   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270329   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.270356   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.270481   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.270658   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.270778   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.270907   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.272808   57679 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:19:14.273370   57679 main.go:141] libmachine: Using API Version  1
	I0315 07:19:14.273402   57679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:19:14.273813   57679 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:19:14.274152   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetState
	I0315 07:19:14.275771   57679 main.go:141] libmachine: (no-preload-184055) Calling .DriverName
	I0315 07:19:14.275998   57679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.276010   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:19:14.276031   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHHostname
	I0315 07:19:14.278965   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279393   57679 main.go:141] libmachine: (no-preload-184055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:f2:82", ip: ""} in network mk-no-preload-184055: {Iface:virbr4 ExpiryTime:2024-03-15 08:12:01 +0000 UTC Type:0 Mac:52:54:00:22:f2:82 Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:no-preload-184055 Clientid:01:52:54:00:22:f2:82}
	I0315 07:19:14.279412   57679 main.go:141] libmachine: (no-preload-184055) DBG | domain no-preload-184055 has defined IP address 192.168.72.106 and MAC address 52:54:00:22:f2:82 in network mk-no-preload-184055
	I0315 07:19:14.279577   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHPort
	I0315 07:19:14.279747   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHKeyPath
	I0315 07:19:14.279903   57679 main.go:141] libmachine: (no-preload-184055) Calling .GetSSHUsername
	I0315 07:19:14.280028   57679 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/no-preload-184055/id_rsa Username:docker}
	I0315 07:19:14.441635   57679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:19:14.463243   57679 node_ready.go:35] waiting up to 6m0s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:14.593277   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:19:14.599264   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:19:14.599296   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:19:14.599400   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:19:14.634906   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:19:14.634937   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:19:14.669572   57679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:14.669600   57679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:19:14.714426   57679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:19:16.067783   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474466196s)
	I0315 07:19:16.067825   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.468392849s)
	I0315 07:19:16.067858   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067902   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.067866   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.067995   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068236   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068266   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068288   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068294   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068302   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068308   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068385   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068394   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.068404   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.068412   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.068644   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.068714   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.068738   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.069047   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.069050   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.069065   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.080621   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.080648   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.081002   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.081084   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.081129   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196048   57679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.481583571s)
	I0315 07:19:16.196097   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196116   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196369   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196392   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196400   57679 main.go:141] libmachine: Making call to close driver server
	I0315 07:19:16.196419   57679 main.go:141] libmachine: (no-preload-184055) DBG | Closing plugin on server side
	I0315 07:19:16.196453   57679 main.go:141] libmachine: (no-preload-184055) Calling .Close
	I0315 07:19:16.196718   57679 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:19:16.196732   57679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:19:16.196742   57679 addons.go:470] Verifying addon metrics-server=true in "no-preload-184055"
	I0315 07:19:16.199850   57679 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
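Each addon in the block above follows the same two-step pattern: the manifest is copied to /etc/kubernetes/addons on the node, then applied with the bundled kubectl while KUBECONFIG points at /var/lib/minikube/kubeconfig. Below is a minimal local sketch of that apply step, not minikube's actual code; the paths are the ones shown in the log, the placeholder manifest and function name are mine, and writing under /etc normally requires root (the real flow does this as root over SSH).

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// applyAddon writes a manifest under /etc/kubernetes/addons and applies it
// with the node's bundled kubectl, mirroring the scp + apply pattern in the
// log above. Paths are taken from the log; adjust for other setups.
func applyAddon(name string, manifest []byte) error {
	dst := filepath.Join("/etc/kubernetes/addons", name)
	if err := os.WriteFile(dst, manifest, 0644); err != nil {
		return err
	}
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"apply", "-f", dst)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Placeholder manifest purely for illustration.
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example-addon\n")
	if err := applyAddon("example.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}
```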
	I0315 07:19:12.531987   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.031467   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:13.532608   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.031933   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:14.532184   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.031988   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:15.532458   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.032228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:16.531459   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:17.032027   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
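The run of pgrep lines above is minikube polling, at roughly 500ms intervals, for a kube-apiserver process whose full command line matches the minikube configuration. A rough equivalent of that wait loop using only the standard library is sketched below; it shows the pattern, not minikube's implementation, and the timeout value is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `sudo pgrep -xnf <pattern>` until a matching
// process appears or the timeout elapses. pgrep exits non-zero when nothing
// matches, which exec surfaces as an error, so err == nil means a match.
func waitForAPIServerProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // PID of the newest matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
}

func main() {
	pid, err := waitForAPIServerProcess("kube-apiserver.*minikube.*", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}
```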
	I0315 07:19:16.268425   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.268461   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.268492   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.289883   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 07:19:16.289913   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 07:19:16.563323   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:16.568933   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:16.568970   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.063417   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.068427   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0315 07:19:17.068479   56654 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0315 07:19:17.563043   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:19:17.577348   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:19:17.590865   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:19:17.590916   56654 api_server.go:131] duration metric: took 4.528511761s to wait for apiserver health ...
	I0315 07:19:17.590925   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:19:17.590932   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:19:17.592836   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
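The healthz sequence above shows the control plane coming up in stages: 403 while anonymous access is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, then 200. A minimal Go sketch of the same kind of polling loop is shown below, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and reusing the endpoint address from the log for illustration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline expires. Non-200 responses (403, 500) are treated as
// "not ready yet", mirroring the retries visible in the log above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serving certificate is self-signed here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// 192.168.39.80:8443 is the address seen in the log; adjust as needed.
	if err := pollHealthz("https://192.168.39.80:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```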
	I0315 07:19:16.201125   57679 addons.go:505] duration metric: took 2.006959428s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:19:16.467672   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:17.594317   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:19:17.610268   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 07:19:17.645192   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:19:17.660620   56654 system_pods.go:59] 8 kube-system pods found
	I0315 07:19:17.660663   56654 system_pods.go:61] "coredns-5dd5756b68-dmphc" [805c94fa-9da4-4996-9661-ff60c8ed444b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:19:17.660674   56654 system_pods.go:61] "etcd-embed-certs-709708" [cf00b70b-b2a1-455d-9463-e6ea89a611be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 07:19:17.660685   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [5b6f30d7-ceae-4342-a7a4-b6f43ec85ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 07:19:17.660696   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [1c86778b-ae51-4483-9ed2-763f5bd6f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0315 07:19:17.660715   56654 system_pods.go:61] "kube-proxy-7shrq" [513d383b-45b4-454b-af71-2f5edce18fc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:19:17.660732   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [a9b13862-6691-438d-a6ec-ee21703fb674] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 07:19:17.660744   56654 system_pods.go:61] "metrics-server-57f55c9bc5-8bslq" [a4d0f4d5-cbcb-48e9-aaa4-600eeec5b976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:19:17.660753   56654 system_pods.go:61] "storage-provisioner" [feb5ca17-4d17-4aeb-88e6-8367b61386fe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0315 07:19:17.660765   56654 system_pods.go:74] duration metric: took 15.551886ms to wait for pod list to return data ...
	I0315 07:19:17.660777   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:19:17.669915   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:19:17.669946   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:19:17.669960   56654 node_conditions.go:105] duration metric: took 9.176702ms to run NodePressure ...
	I0315 07:19:17.669981   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 07:19:18.025295   56654 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030616   56654 kubeadm.go:733] kubelet initialised
	I0315 07:19:18.030642   56654 kubeadm.go:734] duration metric: took 5.319174ms waiting for restarted kubelet to initialise ...
	I0315 07:19:18.030651   56654 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:18.037730   56654 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.045245   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
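The pod_ready lines above come from repeatedly fetching each system-critical pod and checking whether its Ready condition is True. A small client-go sketch of that check follows; it is the same idea, not minikube's pod_ready.go, and the kubeconfig location (clientcmd.RecommendedHomeFile) plus the pod name copied from the log are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path assumed for illustration; the tests talk to the node's
	// /var/lib/minikube/kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(cs, "kube-system", "coredns-5dd5756b68-dmphc")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```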
	I0315 07:19:17.348191   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:19.849900   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:17.532342   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.032247   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.532212   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.032065   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:19.531434   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.032010   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:20.531388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.031901   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:21.532228   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:22.031887   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:18.469495   57679 node_ready.go:53] node "no-preload-184055" has status "Ready":"False"
	I0315 07:19:20.967708   57679 node_ready.go:49] node "no-preload-184055" has status "Ready":"True"
	I0315 07:19:20.967734   57679 node_ready.go:38] duration metric: took 6.504457881s for node "no-preload-184055" to be "Ready" ...
	I0315 07:19:20.967743   57679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:19:20.976921   57679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983022   57679 pod_ready.go:92] pod "coredns-76f75df574-tc5zh" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.983048   57679 pod_ready.go:81] duration metric: took 6.093382ms for pod "coredns-76f75df574-tc5zh" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.983061   57679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989102   57679 pod_ready.go:92] pod "etcd-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.989121   57679 pod_ready.go:81] duration metric: took 6.052567ms for pod "etcd-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.989129   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999531   57679 pod_ready.go:92] pod "kube-apiserver-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:20.999553   57679 pod_ready.go:81] duration metric: took 10.41831ms for pod "kube-apiserver-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:20.999563   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.045411   56654 pod_ready.go:102] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.546044   56654 pod_ready.go:92] pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:22.546069   56654 pod_ready.go:81] duration metric: took 4.50830927s for pod "coredns-5dd5756b68-dmphc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:22.546079   56654 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:24.555268   56654 pod_ready.go:102] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.345813   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:24.845705   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:22.531856   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.032039   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.531734   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.031897   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:24.531504   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.031406   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:25.531483   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.031509   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:26.532066   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:27.032243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:23.007047   57679 pod_ready.go:92] pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.007069   57679 pod_ready.go:81] duration metric: took 2.00750108s for pod "kube-controller-manager-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.007079   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012118   57679 pod_ready.go:92] pod "kube-proxy-977jm" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:23.012139   57679 pod_ready.go:81] duration metric: took 5.055371ms for pod "kube-proxy-977jm" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:23.012150   57679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019452   57679 pod_ready.go:92] pod "kube-scheduler-no-preload-184055" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:25.019474   57679 pod_ready.go:81] duration metric: took 2.00731616s for pod "kube-scheduler-no-preload-184055" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:25.019484   57679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.028392   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.056590   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.056617   56654 pod_ready.go:81] duration metric: took 4.510531726s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.056628   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065529   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.065551   56654 pod_ready.go:81] duration metric: took 8.915701ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.065559   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074860   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.074883   56654 pod_ready.go:81] duration metric: took 9.317289ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.074891   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081676   56654 pod_ready.go:92] pod "kube-proxy-7shrq" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:27.081700   56654 pod_ready.go:81] duration metric: took 6.802205ms for pod "kube-proxy-7shrq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:27.081708   56654 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089182   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:19:28.089210   56654 pod_ready.go:81] duration metric: took 1.007494423s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:28.089219   56654 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	I0315 07:19:30.096544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:26.847134   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:29.345553   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:27.531700   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.032146   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:28.532261   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.031490   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.531402   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.031823   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:30.531717   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.031424   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:31.532225   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:32.032206   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:29.527569   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.026254   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.096930   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.097265   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:31.845510   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:34.344979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:32.531919   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.032150   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:33.532238   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.031740   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:34.532305   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.031967   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:35.532057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:35.532137   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:35.576228   57277 cri.go:89] found id: ""
	I0315 07:19:35.576259   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.576270   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:35.576278   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:35.576366   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:35.621912   57277 cri.go:89] found id: ""
	I0315 07:19:35.621945   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.621956   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:35.621992   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:35.622056   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:35.662434   57277 cri.go:89] found id: ""
	I0315 07:19:35.662465   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.662475   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:35.662482   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:35.662533   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:35.714182   57277 cri.go:89] found id: ""
	I0315 07:19:35.714215   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.714227   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:35.714235   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:35.714301   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:35.754168   57277 cri.go:89] found id: ""
	I0315 07:19:35.754195   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.754204   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:35.754209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:35.754265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:35.793345   57277 cri.go:89] found id: ""
	I0315 07:19:35.793388   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.793397   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:35.793403   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:35.793463   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:35.833188   57277 cri.go:89] found id: ""
	I0315 07:19:35.833218   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.833226   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:35.833231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:35.833289   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:35.873605   57277 cri.go:89] found id: ""
	I0315 07:19:35.873635   57277 logs.go:276] 0 containers: []
	W0315 07:19:35.873644   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:35.873652   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:35.873674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:35.926904   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:35.926942   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:35.941944   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:35.941973   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:36.067568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:36.067596   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:36.067611   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:36.132974   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:36.133016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
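When no kube-apiserver container can be found, the collector above falls back to gathering the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and a container listing. The sketch below runs the same commands locally with the standard library; the command strings are copied verbatim from the log, everything else is a simplification.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs the diagnostic commands the log shows being used
// while the control plane is down, printing whatever each one returns.
func gatherDiagnostics() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}

func main() { gatherDiagnostics() }
```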
	I0315 07:19:34.026626   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.027943   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.097702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.098868   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:36.348931   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.846239   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:38.676378   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:38.692304   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:38.692404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:38.736655   57277 cri.go:89] found id: ""
	I0315 07:19:38.736679   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.736690   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:38.736698   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:38.736756   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:38.775467   57277 cri.go:89] found id: ""
	I0315 07:19:38.775493   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.775503   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:38.775511   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:38.775580   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:38.813989   57277 cri.go:89] found id: ""
	I0315 07:19:38.814017   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.814028   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:38.814034   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:38.814095   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:38.856974   57277 cri.go:89] found id: ""
	I0315 07:19:38.856996   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.857003   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:38.857009   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:38.857055   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:38.895611   57277 cri.go:89] found id: ""
	I0315 07:19:38.895638   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.895647   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:38.895652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:38.895706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:38.934960   57277 cri.go:89] found id: ""
	I0315 07:19:38.934985   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.934992   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:38.934998   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:38.935047   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:38.990742   57277 cri.go:89] found id: ""
	I0315 07:19:38.990776   57277 logs.go:276] 0 containers: []
	W0315 07:19:38.990787   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:38.990795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:38.990860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:39.033788   57277 cri.go:89] found id: ""
	I0315 07:19:39.033812   57277 logs.go:276] 0 containers: []
	W0315 07:19:39.033820   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:39.033828   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:39.033841   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:39.109233   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:39.109252   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:39.109269   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:39.181805   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:39.181842   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:39.227921   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:39.227957   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:39.279530   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:39.279565   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:41.795735   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:41.811311   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:41.811384   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:41.857764   57277 cri.go:89] found id: ""
	I0315 07:19:41.857786   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.857817   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:41.857826   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:41.857882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:41.905243   57277 cri.go:89] found id: ""
	I0315 07:19:41.905273   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.905281   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:41.905286   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:41.905336   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:41.944650   57277 cri.go:89] found id: ""
	I0315 07:19:41.944678   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.944686   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:41.944692   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:41.944760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:41.983485   57277 cri.go:89] found id: ""
	I0315 07:19:41.983508   57277 logs.go:276] 0 containers: []
	W0315 07:19:41.983515   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:41.983521   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:41.983581   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:42.026134   57277 cri.go:89] found id: ""
	I0315 07:19:42.026160   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.026169   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:42.026176   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:42.026250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:42.063835   57277 cri.go:89] found id: ""
	I0315 07:19:42.063869   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.063879   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:42.063885   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:42.063934   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:42.104717   57277 cri.go:89] found id: ""
	I0315 07:19:42.104744   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.104752   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:42.104758   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:42.104813   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:42.153055   57277 cri.go:89] found id: ""
	I0315 07:19:42.153088   57277 logs.go:276] 0 containers: []
	W0315 07:19:42.153096   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:42.153105   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:42.153124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:42.205121   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:42.205154   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:42.220210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:42.220238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:42.305173   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:42.305197   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:42.305209   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:42.378199   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:42.378238   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:38.527055   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:41.032832   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.598598   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.097251   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.098442   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:40.846551   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:43.346243   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.347584   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
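Interleaved with that loop, the pod_ready.go lines from the other test processes (57679, 56654, 56818) are polling metrics-server pods in kube-system and reporting the Ready condition as "False". The following is a hedged client-go sketch of such a readiness check, shown for orientation only; the kubeconfig path, namespace, and pod name are assumptions for illustration and are not taken from minikube's code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, the same
    // condition the pod_ready.go lines above keep finding as "False".
    func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Assumed kubeconfig path and pod name, for illustration only.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(clientset, "kube-system", "metrics-server-57f55c9bc5-gwnxc")
        fmt.Println("ready:", ready, "err:", err)
    }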
	I0315 07:19:44.924009   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:44.939478   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:44.939540   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:44.983604   57277 cri.go:89] found id: ""
	I0315 07:19:44.983627   57277 logs.go:276] 0 containers: []
	W0315 07:19:44.983636   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:44.983641   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:44.983688   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:45.021573   57277 cri.go:89] found id: ""
	I0315 07:19:45.021602   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.021618   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:45.021625   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:45.021705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:45.058694   57277 cri.go:89] found id: ""
	I0315 07:19:45.058721   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.058730   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:45.058737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:45.058797   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:45.098024   57277 cri.go:89] found id: ""
	I0315 07:19:45.098052   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.098061   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:45.098067   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:45.098124   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:45.136387   57277 cri.go:89] found id: ""
	I0315 07:19:45.136417   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.136425   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:45.136431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:45.136509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:45.175479   57277 cri.go:89] found id: ""
	I0315 07:19:45.175512   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.175523   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:45.175531   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:45.175591   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:45.213566   57277 cri.go:89] found id: ""
	I0315 07:19:45.213588   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.213595   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:45.213601   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:45.213683   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:45.251954   57277 cri.go:89] found id: ""
	I0315 07:19:45.251982   57277 logs.go:276] 0 containers: []
	W0315 07:19:45.251992   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:45.252024   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:45.252039   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:45.306696   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:45.306730   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:45.321645   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:45.321673   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:45.419225   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:45.419261   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:45.419277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:45.511681   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:45.511721   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:43.526837   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:45.528440   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.106379   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.596813   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:47.845408   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:49.846294   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:48.058161   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:48.073287   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:48.073380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:48.112981   57277 cri.go:89] found id: ""
	I0315 07:19:48.113011   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.113020   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:48.113036   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:48.113082   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:48.155155   57277 cri.go:89] found id: ""
	I0315 07:19:48.155180   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.155187   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:48.155193   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:48.155248   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:48.193666   57277 cri.go:89] found id: ""
	I0315 07:19:48.193692   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.193700   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:48.193705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:48.193765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:48.235805   57277 cri.go:89] found id: ""
	I0315 07:19:48.235834   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.235845   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:48.235852   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:48.235913   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:48.275183   57277 cri.go:89] found id: ""
	I0315 07:19:48.275208   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.275216   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:48.275221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:48.275267   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:48.314166   57277 cri.go:89] found id: ""
	I0315 07:19:48.314199   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.314207   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:48.314212   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:48.314259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:48.352594   57277 cri.go:89] found id: ""
	I0315 07:19:48.352622   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.352633   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:48.352640   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:48.352697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:48.396831   57277 cri.go:89] found id: ""
	I0315 07:19:48.396857   57277 logs.go:276] 0 containers: []
	W0315 07:19:48.396868   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:48.396879   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:48.396893   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:48.451105   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:48.451141   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.467162   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:48.467199   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:48.565282   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:48.565315   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:48.565330   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:48.657737   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:48.657770   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.206068   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:51.227376   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:51.227457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:51.294309   57277 cri.go:89] found id: ""
	I0315 07:19:51.294352   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.294364   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:51.294372   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:51.294445   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:51.344526   57277 cri.go:89] found id: ""
	I0315 07:19:51.344553   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.344562   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:51.344569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:51.344628   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:51.395958   57277 cri.go:89] found id: ""
	I0315 07:19:51.395986   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.395997   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:51.396004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:51.396064   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:51.440033   57277 cri.go:89] found id: ""
	I0315 07:19:51.440056   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.440064   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:51.440069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:51.440116   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:51.477215   57277 cri.go:89] found id: ""
	I0315 07:19:51.477243   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.477254   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:51.477261   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:51.477309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:51.514899   57277 cri.go:89] found id: ""
	I0315 07:19:51.514937   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.514949   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:51.514957   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:51.515098   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:51.555763   57277 cri.go:89] found id: ""
	I0315 07:19:51.555796   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.555808   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:51.555816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:51.555888   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:51.597164   57277 cri.go:89] found id: ""
	I0315 07:19:51.597192   57277 logs.go:276] 0 containers: []
	W0315 07:19:51.597201   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:51.597212   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:51.597226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:51.676615   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:51.676642   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:51.676658   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:51.761136   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:51.761169   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:51.807329   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:51.807355   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:51.864241   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:51.864277   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:48.026480   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:50.026952   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:52.027649   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.598452   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.096541   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:51.846375   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:53.848067   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:54.380720   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:54.395231   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:54.395290   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:54.442958   57277 cri.go:89] found id: ""
	I0315 07:19:54.442988   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.442999   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:54.443007   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:54.443072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:54.486637   57277 cri.go:89] found id: ""
	I0315 07:19:54.486660   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.486670   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:54.486677   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:54.486739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:54.523616   57277 cri.go:89] found id: ""
	I0315 07:19:54.523639   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.523646   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:54.523652   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:54.523704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:54.560731   57277 cri.go:89] found id: ""
	I0315 07:19:54.560757   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.560771   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:54.560780   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:54.560840   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:54.606013   57277 cri.go:89] found id: ""
	I0315 07:19:54.606040   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.606049   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:54.606057   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:54.606111   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:54.650111   57277 cri.go:89] found id: ""
	I0315 07:19:54.650131   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.650139   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:54.650145   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:54.650211   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:54.691888   57277 cri.go:89] found id: ""
	I0315 07:19:54.691916   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.691927   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:54.691935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:54.691986   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:54.732932   57277 cri.go:89] found id: ""
	I0315 07:19:54.732957   57277 logs.go:276] 0 containers: []
	W0315 07:19:54.732969   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:54.732979   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:54.732994   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:54.789276   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:54.789312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:19:54.804505   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:54.804535   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:54.886268   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:54.886292   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:54.886312   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:54.966528   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:54.966561   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:54.526972   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.530973   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.097277   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.599271   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:56.347008   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:58.848301   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:19:57.511989   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:19:57.526208   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:19:57.526281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:19:57.567914   57277 cri.go:89] found id: ""
	I0315 07:19:57.567939   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.567946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:19:57.567952   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:19:57.568006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:19:57.609841   57277 cri.go:89] found id: ""
	I0315 07:19:57.609871   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.609883   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:19:57.609890   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:19:57.609951   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:19:57.651769   57277 cri.go:89] found id: ""
	I0315 07:19:57.651796   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.651807   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:19:57.651815   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:19:57.651881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:19:57.687389   57277 cri.go:89] found id: ""
	I0315 07:19:57.687418   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.687425   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:19:57.687432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:19:57.687483   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:19:57.725932   57277 cri.go:89] found id: ""
	I0315 07:19:57.725959   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.725968   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:19:57.725975   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:19:57.726023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:19:57.763117   57277 cri.go:89] found id: ""
	I0315 07:19:57.763147   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.763157   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:19:57.763164   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:19:57.763226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:19:57.799737   57277 cri.go:89] found id: ""
	I0315 07:19:57.799768   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.799779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:19:57.799787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:19:57.799852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:19:57.837687   57277 cri.go:89] found id: ""
	I0315 07:19:57.837710   57277 logs.go:276] 0 containers: []
	W0315 07:19:57.837718   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:19:57.837725   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:19:57.837738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:19:57.918837   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:19:57.918864   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:19:57.918880   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:19:58.002619   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:19:58.002657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:19:58.049971   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:19:58.050001   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:58.100763   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:19:58.100793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.616093   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:00.631840   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:00.631900   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:00.669276   57277 cri.go:89] found id: ""
	I0315 07:20:00.669303   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.669318   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:00.669325   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:00.669386   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:00.709521   57277 cri.go:89] found id: ""
	I0315 07:20:00.709552   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.709563   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:00.709569   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:00.709616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:00.749768   57277 cri.go:89] found id: ""
	I0315 07:20:00.749798   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.749810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:00.749818   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:00.749881   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:00.786851   57277 cri.go:89] found id: ""
	I0315 07:20:00.786925   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.786944   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:00.786955   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:00.787019   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:00.826209   57277 cri.go:89] found id: ""
	I0315 07:20:00.826238   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.826249   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:00.826258   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:00.826306   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:00.867316   57277 cri.go:89] found id: ""
	I0315 07:20:00.867341   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.867348   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:00.867354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:00.867408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:00.909167   57277 cri.go:89] found id: ""
	I0315 07:20:00.909200   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.909213   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:00.909221   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:00.909282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:00.948613   57277 cri.go:89] found id: ""
	I0315 07:20:00.948639   57277 logs.go:276] 0 containers: []
	W0315 07:20:00.948650   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:00.948663   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:00.948677   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:00.964131   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:00.964156   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:01.039138   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:01.039156   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:01.039168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:01.115080   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:01.115112   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:01.154903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:01.154931   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:19:59.026054   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.028981   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.097126   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.595864   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:01.345979   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.346201   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:03.704229   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:03.719102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:03.719161   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:03.757533   57277 cri.go:89] found id: ""
	I0315 07:20:03.757562   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.757589   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:03.757595   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:03.757648   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:03.795731   57277 cri.go:89] found id: ""
	I0315 07:20:03.795765   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.795774   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:03.795781   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:03.795842   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:03.836712   57277 cri.go:89] found id: ""
	I0315 07:20:03.836739   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.836749   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:03.836757   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:03.836823   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:03.875011   57277 cri.go:89] found id: ""
	I0315 07:20:03.875043   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.875052   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:03.875058   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:03.875115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:03.912369   57277 cri.go:89] found id: ""
	I0315 07:20:03.912396   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.912407   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:03.912414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:03.912491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:03.954473   57277 cri.go:89] found id: ""
	I0315 07:20:03.954495   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.954502   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:03.954508   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:03.954556   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:03.999738   57277 cri.go:89] found id: ""
	I0315 07:20:03.999768   57277 logs.go:276] 0 containers: []
	W0315 07:20:03.999779   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:03.999787   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:03.999852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:04.040397   57277 cri.go:89] found id: ""
	I0315 07:20:04.040419   57277 logs.go:276] 0 containers: []
	W0315 07:20:04.040427   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:04.040435   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:04.040447   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:04.095183   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:04.095226   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:04.110204   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:04.110233   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:04.184335   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:04.184361   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:04.184376   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:04.266811   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:04.266847   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:06.806506   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:06.822203   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:06.822276   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:06.863565   57277 cri.go:89] found id: ""
	I0315 07:20:06.863609   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.863623   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:06.863630   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:06.863702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:06.902098   57277 cri.go:89] found id: ""
	I0315 07:20:06.902126   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.902134   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:06.902139   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:06.902189   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:06.946732   57277 cri.go:89] found id: ""
	I0315 07:20:06.946770   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.946781   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:06.946789   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:06.946850   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:06.994866   57277 cri.go:89] found id: ""
	I0315 07:20:06.994891   57277 logs.go:276] 0 containers: []
	W0315 07:20:06.994903   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:06.994910   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:06.994969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:07.036413   57277 cri.go:89] found id: ""
	I0315 07:20:07.036438   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.036445   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:07.036451   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:07.036517   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:07.075159   57277 cri.go:89] found id: ""
	I0315 07:20:07.075184   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.075192   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:07.075199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:07.075265   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:07.115669   57277 cri.go:89] found id: ""
	I0315 07:20:07.115699   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.115707   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:07.115713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:07.115765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:07.153875   57277 cri.go:89] found id: ""
	I0315 07:20:07.153930   57277 logs.go:276] 0 containers: []
	W0315 07:20:07.153943   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:07.153958   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:07.153978   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:07.231653   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:07.231691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:07.273797   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:07.273826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:07.326740   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:07.326805   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:07.341824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:07.341856   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:07.418362   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
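Every "describe nodes" attempt above fails the same way because nothing is listening on the apiserver port: the crictl probes found no kube-apiserver container, so kubectl's connection to localhost:8443 is refused. The small Go probe below illustrates checking that endpoint directly; the port and the insecure TLS setting are assumptions for this sketch only, not part of the test harness.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the endpoint the log's kubectl calls are refused by. A refused
        // TCP connection fails before TLS, so the insecure setting only matters
        // once an apiserver is actually up.
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. "connection refused"
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver /healthz status:", resp.Status)
    }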
	I0315 07:20:03.526771   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:06.026284   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.599685   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.600052   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.096969   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:05.847109   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:07.850538   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.348558   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:09.918888   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:09.935170   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:09.935237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:09.973156   57277 cri.go:89] found id: ""
	I0315 07:20:09.973185   57277 logs.go:276] 0 containers: []
	W0315 07:20:09.973197   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:09.973205   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:09.973261   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:10.014296   57277 cri.go:89] found id: ""
	I0315 07:20:10.014324   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.014332   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:10.014342   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:10.014398   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:10.053300   57277 cri.go:89] found id: ""
	I0315 07:20:10.053329   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.053338   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:10.053345   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:10.053408   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:10.096796   57277 cri.go:89] found id: ""
	I0315 07:20:10.096822   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.096830   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:10.096838   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:10.096906   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:10.138782   57277 cri.go:89] found id: ""
	I0315 07:20:10.138805   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.138815   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:10.138822   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:10.138882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:10.177251   57277 cri.go:89] found id: ""
	I0315 07:20:10.177277   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.177287   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:10.177294   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:10.177355   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:10.214735   57277 cri.go:89] found id: ""
	I0315 07:20:10.214760   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.214784   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:10.214793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:10.214865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:10.255059   57277 cri.go:89] found id: ""
	I0315 07:20:10.255083   57277 logs.go:276] 0 containers: []
	W0315 07:20:10.255091   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:10.255100   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:10.255115   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:10.310667   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:10.310704   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:10.325054   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:10.325086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:10.406056   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:10.406082   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:10.406096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:10.486796   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:10.486832   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:08.027865   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:10.527476   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.527954   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.597812   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.602059   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:12.846105   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:14.846743   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:13.030542   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:13.044863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:13.044928   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:13.082856   57277 cri.go:89] found id: ""
	I0315 07:20:13.082881   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.082889   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:13.082895   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:13.082953   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:13.122413   57277 cri.go:89] found id: ""
	I0315 07:20:13.122437   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.122448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:13.122455   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:13.122515   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:13.161726   57277 cri.go:89] found id: ""
	I0315 07:20:13.161753   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.161763   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:13.161771   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:13.161830   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:13.200647   57277 cri.go:89] found id: ""
	I0315 07:20:13.200677   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.200688   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:13.200697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:13.200759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:13.238945   57277 cri.go:89] found id: ""
	I0315 07:20:13.238972   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.238980   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:13.238986   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:13.239048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:13.277257   57277 cri.go:89] found id: ""
	I0315 07:20:13.277288   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.277298   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:13.277305   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:13.277368   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:13.318135   57277 cri.go:89] found id: ""
	I0315 07:20:13.318168   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.318200   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:13.318209   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:13.318257   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:13.357912   57277 cri.go:89] found id: ""
	I0315 07:20:13.357938   57277 logs.go:276] 0 containers: []
	W0315 07:20:13.357946   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:13.357954   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:13.357964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:13.431470   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:13.431496   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:13.431511   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:13.519085   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:13.519125   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:13.560765   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:13.560793   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:13.616601   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:13.616636   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.131877   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:16.147546   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:16.147615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:16.190496   57277 cri.go:89] found id: ""
	I0315 07:20:16.190521   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.190530   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:16.190536   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:16.190613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:16.230499   57277 cri.go:89] found id: ""
	I0315 07:20:16.230531   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.230542   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:16.230550   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:16.230613   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:16.271190   57277 cri.go:89] found id: ""
	I0315 07:20:16.271217   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.271225   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:16.271230   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:16.271275   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:16.309316   57277 cri.go:89] found id: ""
	I0315 07:20:16.309338   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.309349   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:16.309357   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:16.309421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:16.347784   57277 cri.go:89] found id: ""
	I0315 07:20:16.347814   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.347824   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:16.347831   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:16.347887   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:16.386883   57277 cri.go:89] found id: ""
	I0315 07:20:16.386912   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.386921   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:16.386929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:16.386997   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:16.424147   57277 cri.go:89] found id: ""
	I0315 07:20:16.424178   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.424194   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:16.424201   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:16.424260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:16.462759   57277 cri.go:89] found id: ""
	I0315 07:20:16.462790   57277 logs.go:276] 0 containers: []
	W0315 07:20:16.462801   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:16.462812   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:16.462826   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:16.477614   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:16.477639   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:16.551703   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:16.551719   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:16.551731   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:16.639644   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:16.639691   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:16.686504   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:16.686530   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:15.026385   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.527318   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:17.098077   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.601153   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:16.847560   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.346641   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:19.239628   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:19.256218   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:19.256294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:19.294615   57277 cri.go:89] found id: ""
	I0315 07:20:19.294649   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.294657   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:19.294663   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:19.294721   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:19.333629   57277 cri.go:89] found id: ""
	I0315 07:20:19.333654   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.333665   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:19.333672   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:19.333741   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:19.373976   57277 cri.go:89] found id: ""
	I0315 07:20:19.374005   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.374015   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:19.374023   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:19.374081   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:19.409962   57277 cri.go:89] found id: ""
	I0315 07:20:19.409991   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.410013   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:19.410022   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:19.410084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:19.456671   57277 cri.go:89] found id: ""
	I0315 07:20:19.456701   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.456711   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:19.456719   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:19.456784   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:19.521931   57277 cri.go:89] found id: ""
	I0315 07:20:19.521960   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.521970   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:19.521978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:19.522038   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:19.561202   57277 cri.go:89] found id: ""
	I0315 07:20:19.561231   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.561244   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:19.561251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:19.561298   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:19.603940   57277 cri.go:89] found id: ""
	I0315 07:20:19.603964   57277 logs.go:276] 0 containers: []
	W0315 07:20:19.603975   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:19.603985   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:19.603999   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:19.659334   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:19.659367   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:19.674058   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:19.674086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:19.750018   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:19.750046   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:19.750078   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:19.834341   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:19.834401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:22.379055   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:22.394515   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:22.394587   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:22.432914   57277 cri.go:89] found id: ""
	I0315 07:20:22.432945   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.432956   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:22.432964   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:22.433020   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:22.472827   57277 cri.go:89] found id: ""
	I0315 07:20:22.472856   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.472867   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:22.472875   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:22.472936   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:20.026864   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.028043   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.096335   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.097100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:21.846022   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:24.348574   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:22.513376   57277 cri.go:89] found id: ""
	I0315 07:20:22.513405   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.513416   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:22.513424   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:22.513499   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:22.554899   57277 cri.go:89] found id: ""
	I0315 07:20:22.554926   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.554936   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:22.554945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:22.555008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:22.593188   57277 cri.go:89] found id: ""
	I0315 07:20:22.593217   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.593228   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:22.593238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:22.593319   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:22.635671   57277 cri.go:89] found id: ""
	I0315 07:20:22.635696   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.635707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:22.635715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:22.635775   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:22.676161   57277 cri.go:89] found id: ""
	I0315 07:20:22.676192   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.676199   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:22.676205   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:22.676252   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:22.719714   57277 cri.go:89] found id: ""
	I0315 07:20:22.719745   57277 logs.go:276] 0 containers: []
	W0315 07:20:22.719756   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:22.719767   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:22.719782   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:22.770189   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:22.770221   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:22.784568   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:22.784595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:22.866647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:22.866688   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:22.866703   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:22.947794   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:22.947829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.492492   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:25.507514   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:25.507594   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:25.549505   57277 cri.go:89] found id: ""
	I0315 07:20:25.549532   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.549540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:25.549547   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:25.549609   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:25.590718   57277 cri.go:89] found id: ""
	I0315 07:20:25.590745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.590756   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:25.590763   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:25.590821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:25.632345   57277 cri.go:89] found id: ""
	I0315 07:20:25.632375   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.632385   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:25.632392   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:25.632457   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:25.674714   57277 cri.go:89] found id: ""
	I0315 07:20:25.674745   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.674754   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:25.674760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:25.674807   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:25.714587   57277 cri.go:89] found id: ""
	I0315 07:20:25.714626   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.714636   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:25.714644   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:25.714704   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:25.752190   57277 cri.go:89] found id: ""
	I0315 07:20:25.752219   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.752229   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:25.752238   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:25.752293   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:25.790923   57277 cri.go:89] found id: ""
	I0315 07:20:25.790956   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.790964   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:25.790973   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:25.791029   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:25.830924   57277 cri.go:89] found id: ""
	I0315 07:20:25.830951   57277 logs.go:276] 0 containers: []
	W0315 07:20:25.830959   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:25.830967   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:25.830979   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:25.908873   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:25.908905   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:25.958522   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:25.958559   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:26.011304   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:26.011340   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:26.026210   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:26.026236   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:26.096875   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:24.526233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:27.026786   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.598148   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:29.099229   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:26.846029   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.847088   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:28.597246   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:28.612564   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:28.612640   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:28.649913   57277 cri.go:89] found id: ""
	I0315 07:20:28.649939   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.649950   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:28.649958   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:28.650023   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:28.690556   57277 cri.go:89] found id: ""
	I0315 07:20:28.690584   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.690599   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:28.690608   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:28.690685   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:28.729895   57277 cri.go:89] found id: ""
	I0315 07:20:28.729927   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.729940   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:28.729948   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:28.730009   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:28.766894   57277 cri.go:89] found id: ""
	I0315 07:20:28.766931   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.766942   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:28.766950   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:28.767008   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:28.805894   57277 cri.go:89] found id: ""
	I0315 07:20:28.805916   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.805924   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:28.805929   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:28.806002   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:28.844944   57277 cri.go:89] found id: ""
	I0315 07:20:28.844983   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.844995   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:28.845018   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:28.845078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:28.885153   57277 cri.go:89] found id: ""
	I0315 07:20:28.885186   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.885197   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:28.885206   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:28.885263   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:28.922534   57277 cri.go:89] found id: ""
	I0315 07:20:28.922581   57277 logs.go:276] 0 containers: []
	W0315 07:20:28.922591   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:28.922601   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:28.922621   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:28.973076   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:28.973109   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:28.989283   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:28.989310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:29.067063   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:29.067085   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:29.067100   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:29.145316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:29.145351   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:31.687137   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:31.703447   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:31.703520   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:31.742949   57277 cri.go:89] found id: ""
	I0315 07:20:31.742976   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.742984   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:31.742989   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:31.743046   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:31.784965   57277 cri.go:89] found id: ""
	I0315 07:20:31.784994   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.785004   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:31.785012   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:31.785087   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:31.825242   57277 cri.go:89] found id: ""
	I0315 07:20:31.825266   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.825275   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:31.825281   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:31.825327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:31.867216   57277 cri.go:89] found id: ""
	I0315 07:20:31.867248   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.867261   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:31.867269   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:31.867332   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:31.932729   57277 cri.go:89] found id: ""
	I0315 07:20:31.932758   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.932769   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:31.932778   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:31.932831   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:31.997391   57277 cri.go:89] found id: ""
	I0315 07:20:31.997419   57277 logs.go:276] 0 containers: []
	W0315 07:20:31.997429   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:31.997437   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:31.997500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:32.048426   57277 cri.go:89] found id: ""
	I0315 07:20:32.048453   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.048479   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:32.048488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:32.048542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:32.088211   57277 cri.go:89] found id: ""
	I0315 07:20:32.088240   57277 logs.go:276] 0 containers: []
	W0315 07:20:32.088248   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:32.088255   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:32.088267   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:32.144664   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:32.144700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:32.161982   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:32.162015   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:32.238343   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:32.238368   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:32.238386   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:32.316692   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:32.316732   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:29.027562   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.028426   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:31.598245   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.097665   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:30.847325   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:33.345680   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:35.346592   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:34.866369   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:34.880816   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:34.880895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:34.925359   57277 cri.go:89] found id: ""
	I0315 07:20:34.925399   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.925411   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:34.925419   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:34.925482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:34.968121   57277 cri.go:89] found id: ""
	I0315 07:20:34.968152   57277 logs.go:276] 0 containers: []
	W0315 07:20:34.968163   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:34.968170   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:34.968233   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:35.007243   57277 cri.go:89] found id: ""
	I0315 07:20:35.007273   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.007281   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:35.007291   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:35.007352   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:35.045468   57277 cri.go:89] found id: ""
	I0315 07:20:35.045492   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.045500   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:35.045505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:35.045553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:35.084764   57277 cri.go:89] found id: ""
	I0315 07:20:35.084790   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.084801   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:35.084808   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:35.084869   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:35.124356   57277 cri.go:89] found id: ""
	I0315 07:20:35.124391   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.124404   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:35.124413   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:35.124492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:35.161308   57277 cri.go:89] found id: ""
	I0315 07:20:35.161341   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.161348   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:35.161354   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:35.161419   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:35.202157   57277 cri.go:89] found id: ""
	I0315 07:20:35.202183   57277 logs.go:276] 0 containers: []
	W0315 07:20:35.202194   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:35.202204   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:35.202223   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:35.252206   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:35.252243   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:35.268406   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:35.268436   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:35.347194   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:35.347219   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:35.347232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:35.421316   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:35.421350   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:33.527094   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.026803   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:36.600818   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.096329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.346621   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:39.846377   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:37.963658   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:37.985595   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:37.985669   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:38.030827   57277 cri.go:89] found id: ""
	I0315 07:20:38.030948   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.030966   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:38.030979   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:38.031051   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:38.070360   57277 cri.go:89] found id: ""
	I0315 07:20:38.070404   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.070415   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:38.070423   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:38.070486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:38.109929   57277 cri.go:89] found id: ""
	I0315 07:20:38.109960   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.109971   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:38.109979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:38.110040   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:38.150040   57277 cri.go:89] found id: ""
	I0315 07:20:38.150074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.150082   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:38.150088   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:38.150164   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:38.192283   57277 cri.go:89] found id: ""
	I0315 07:20:38.192308   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.192315   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:38.192321   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:38.192404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:38.237046   57277 cri.go:89] found id: ""
	I0315 07:20:38.237074   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.237085   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:38.237092   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:38.237160   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:38.277396   57277 cri.go:89] found id: ""
	I0315 07:20:38.277425   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.277436   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:38.277443   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:38.277498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:38.316703   57277 cri.go:89] found id: ""
	I0315 07:20:38.316732   57277 logs.go:276] 0 containers: []
	W0315 07:20:38.316741   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:38.316750   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:38.316762   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.360325   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:38.360354   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:38.416980   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:38.417016   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:38.431649   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:38.431674   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:38.512722   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:38.512750   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:38.512766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.103071   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:41.117295   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:41.117376   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:41.161189   57277 cri.go:89] found id: ""
	I0315 07:20:41.161214   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.161221   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:41.161228   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:41.161287   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:41.203519   57277 cri.go:89] found id: ""
	I0315 07:20:41.203545   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.203552   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:41.203558   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:41.203611   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:41.246466   57277 cri.go:89] found id: ""
	I0315 07:20:41.246491   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.246499   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:41.246505   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:41.246564   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:41.291206   57277 cri.go:89] found id: ""
	I0315 07:20:41.291229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.291237   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:41.291243   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:41.291304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:41.333273   57277 cri.go:89] found id: ""
	I0315 07:20:41.333299   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.333307   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:41.333313   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:41.333361   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:41.378974   57277 cri.go:89] found id: ""
	I0315 07:20:41.379002   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.379013   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:41.379020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:41.379076   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:41.427203   57277 cri.go:89] found id: ""
	I0315 07:20:41.427229   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.427239   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:41.427248   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:41.427316   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:41.472217   57277 cri.go:89] found id: ""
	I0315 07:20:41.472251   57277 logs.go:276] 0 containers: []
	W0315 07:20:41.472261   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:41.472272   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:41.472291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:41.528894   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:41.528928   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:41.544968   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:41.545003   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:41.621382   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:41.621408   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:41.621430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:41.706694   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:41.706729   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:38.027263   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:40.027361   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.526405   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:41.597702   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:43.598129   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:42.345534   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.347018   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:44.252415   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:44.267679   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:44.267739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:44.305349   57277 cri.go:89] found id: ""
	I0315 07:20:44.305376   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.305383   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:44.305390   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:44.305448   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:44.345386   57277 cri.go:89] found id: ""
	I0315 07:20:44.345413   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.345425   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:44.345432   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:44.345492   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.389150   57277 cri.go:89] found id: ""
	I0315 07:20:44.389180   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.389191   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:44.389199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:44.389255   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:44.427161   57277 cri.go:89] found id: ""
	I0315 07:20:44.427189   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.427202   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:44.427210   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:44.427259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:44.468317   57277 cri.go:89] found id: ""
	I0315 07:20:44.468343   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.468353   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:44.468360   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:44.468420   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:44.511984   57277 cri.go:89] found id: ""
	I0315 07:20:44.512015   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.512026   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:44.512033   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:44.512092   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:44.554376   57277 cri.go:89] found id: ""
	I0315 07:20:44.554404   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.554414   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:44.554421   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:44.554488   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:44.593658   57277 cri.go:89] found id: ""
	I0315 07:20:44.593684   57277 logs.go:276] 0 containers: []
	W0315 07:20:44.593695   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:44.593706   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:44.593722   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:44.609780   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:44.609812   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:44.689696   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:44.689739   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:44.689759   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:44.769358   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:44.769396   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:44.812832   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:44.812867   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
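The diagnostic pass that repeats throughout this log boils down to a short sequence of commands run over SSH inside the guest. A minimal sketch, assembled only from the Run: lines above (the loop over component names is illustrative shorthand, not the minikube source):

	# Look for a running apiserver process first, as each cycle above does.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Query CRI-O for control-plane containers by name; every query here returns no IDs.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# With no containers found, fall back to gathering node-level logs.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	     --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a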
	I0315 07:20:47.367953   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:47.382792   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:47.382860   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:47.421374   57277 cri.go:89] found id: ""
	I0315 07:20:47.421406   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.421417   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:47.421425   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:47.421484   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:47.459155   57277 cri.go:89] found id: ""
	I0315 07:20:47.459186   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.459194   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:47.459200   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:47.459259   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:44.528381   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.026113   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.096579   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.096638   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:46.845792   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:48.846525   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:47.502719   57277 cri.go:89] found id: ""
	I0315 07:20:47.502744   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.502754   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:47.502762   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:47.502820   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:47.548388   57277 cri.go:89] found id: ""
	I0315 07:20:47.548415   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.548426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:47.548438   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:47.548500   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:47.587502   57277 cri.go:89] found id: ""
	I0315 07:20:47.587526   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.587534   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:47.587540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:47.587605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:47.633661   57277 cri.go:89] found id: ""
	I0315 07:20:47.633689   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.633700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:47.633708   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:47.633776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:47.680492   57277 cri.go:89] found id: ""
	I0315 07:20:47.680524   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.680535   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:47.680543   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:47.680603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:47.725514   57277 cri.go:89] found id: ""
	I0315 07:20:47.725537   57277 logs.go:276] 0 containers: []
	W0315 07:20:47.725545   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:47.725554   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:47.725567   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:47.779396   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:47.779430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:47.794431   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:47.794461   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:47.876179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:47.876202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:47.876217   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:47.958413   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:47.958448   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.502388   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:50.517797   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:50.517855   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:50.563032   57277 cri.go:89] found id: ""
	I0315 07:20:50.563057   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.563065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:50.563070   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:50.563140   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:50.605578   57277 cri.go:89] found id: ""
	I0315 07:20:50.605602   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.605612   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:50.605619   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:50.605710   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:50.645706   57277 cri.go:89] found id: ""
	I0315 07:20:50.645731   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.645748   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:50.645756   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:50.645825   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:50.688298   57277 cri.go:89] found id: ""
	I0315 07:20:50.688326   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.688337   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:50.688349   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:50.688404   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:50.727038   57277 cri.go:89] found id: ""
	I0315 07:20:50.727067   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.727079   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:50.727086   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:50.727146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:50.764671   57277 cri.go:89] found id: ""
	I0315 07:20:50.764693   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.764700   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:50.764706   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:50.764760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:50.805791   57277 cri.go:89] found id: ""
	I0315 07:20:50.805822   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.805830   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:50.805836   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:50.805895   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:50.844230   57277 cri.go:89] found id: ""
	I0315 07:20:50.844256   57277 logs.go:276] 0 containers: []
	W0315 07:20:50.844265   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:50.844276   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:50.844292   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:50.885139   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:50.885164   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:50.939212   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:50.939249   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:50.954230   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:50.954255   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:51.035305   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:51.035325   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:51.035339   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:49.028824   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:51.033271   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.597584   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:52.599592   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:54.599664   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:50.847370   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.346453   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.346610   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:53.622318   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:53.637642   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:53.637726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:53.677494   57277 cri.go:89] found id: ""
	I0315 07:20:53.677526   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.677534   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:53.677540   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:53.677603   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:53.716321   57277 cri.go:89] found id: ""
	I0315 07:20:53.716347   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.716362   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:53.716368   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:53.716417   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:53.755200   57277 cri.go:89] found id: ""
	I0315 07:20:53.755229   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.755238   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:53.755245   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:53.755294   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:53.793810   57277 cri.go:89] found id: ""
	I0315 07:20:53.793840   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.793848   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:53.793855   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:53.793912   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:53.840951   57277 cri.go:89] found id: ""
	I0315 07:20:53.840977   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.840984   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:53.840991   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:53.841053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:53.887793   57277 cri.go:89] found id: ""
	I0315 07:20:53.887826   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.887833   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:53.887851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:53.887904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:53.930691   57277 cri.go:89] found id: ""
	I0315 07:20:53.930723   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.930731   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:53.930737   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:53.930812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:53.967119   57277 cri.go:89] found id: ""
	I0315 07:20:53.967146   57277 logs.go:276] 0 containers: []
	W0315 07:20:53.967155   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:53.967166   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:53.967181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:54.020540   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:54.020575   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:20:54.036514   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:54.036548   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:54.118168   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:54.118191   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:54.118204   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:54.195793   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:54.195829   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:56.741323   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:56.756743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:56.756801   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:56.794099   57277 cri.go:89] found id: ""
	I0315 07:20:56.794131   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.794139   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:56.794145   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:56.794195   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:56.835757   57277 cri.go:89] found id: ""
	I0315 07:20:56.835837   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.835862   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:56.835879   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:56.835943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:56.874157   57277 cri.go:89] found id: ""
	I0315 07:20:56.874180   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.874187   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:56.874193   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:56.874237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:20:56.917262   57277 cri.go:89] found id: ""
	I0315 07:20:56.917290   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.917301   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:20:56.917310   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:20:56.917371   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:20:56.959293   57277 cri.go:89] found id: ""
	I0315 07:20:56.959316   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.959326   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:20:56.959339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:20:56.959385   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:20:56.997710   57277 cri.go:89] found id: ""
	I0315 07:20:56.997734   57277 logs.go:276] 0 containers: []
	W0315 07:20:56.997742   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:20:56.997748   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:20:56.997806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:20:57.038379   57277 cri.go:89] found id: ""
	I0315 07:20:57.038404   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.038411   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:20:57.038417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:20:57.038475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:20:57.077052   57277 cri.go:89] found id: ""
	I0315 07:20:57.077079   57277 logs.go:276] 0 containers: []
	W0315 07:20:57.077087   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:20:57.077097   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:20:57.077121   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:20:57.154402   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:20:57.154427   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:20:57.154442   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:57.237115   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:20:57.237153   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:20:57.278608   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:20:57.278642   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:20:57.333427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:20:57.333512   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
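Every describe-nodes attempt in these cycles fails with a refused connection to localhost:8443, the apiserver endpoint this profile's kubeconfig points at, and the kube-apiserver container listings all come back empty, so the control plane never came up. A quick manual check from inside the node, assuming curl and ss are available in the guest image, might look like:

	# Is anything listening on the apiserver port the kubeconfig points at?
	sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'
	# Probe the secure port directly; -k skips verification of the self-signed cert.
	curl -k https://localhost:8443/healthz
	# Ask the runtime for apiserver containers, including exited ones.
	sudo crictl ps -a --name kube-apiserver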
	I0315 07:20:53.526558   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:55.527917   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.096886   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.596353   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:57.845438   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.846398   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:20:59.850192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:20:59.864883   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:20:59.864960   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:20:59.903821   57277 cri.go:89] found id: ""
	I0315 07:20:59.903847   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.903855   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:20:59.903861   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:20:59.903911   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:20:59.941859   57277 cri.go:89] found id: ""
	I0315 07:20:59.941889   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.941900   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:20:59.941908   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:20:59.941969   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:20:59.983652   57277 cri.go:89] found id: ""
	I0315 07:20:59.983678   57277 logs.go:276] 0 containers: []
	W0315 07:20:59.983688   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:20:59.983696   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:20:59.983757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:00.024881   57277 cri.go:89] found id: ""
	I0315 07:21:00.024904   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.024913   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:00.024922   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:00.024977   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:00.062967   57277 cri.go:89] found id: ""
	I0315 07:21:00.062990   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.062998   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:00.063004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:00.063068   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:00.107266   57277 cri.go:89] found id: ""
	I0315 07:21:00.107293   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.107302   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:00.107308   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:00.107367   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:00.150691   57277 cri.go:89] found id: ""
	I0315 07:21:00.150713   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.150723   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:00.150731   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:00.150791   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:00.190084   57277 cri.go:89] found id: ""
	I0315 07:21:00.190113   57277 logs.go:276] 0 containers: []
	W0315 07:21:00.190121   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:00.190129   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:00.190148   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:00.241282   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:00.241310   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:00.309325   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:00.309356   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:00.323486   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:00.323510   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:00.403916   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:00.403935   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:00.403949   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:20:58.026164   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:00.026742   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.526967   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:01.598405   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:03.599012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.345122   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:04.346032   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:02.987607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:03.004147   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:03.004214   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:03.042690   57277 cri.go:89] found id: ""
	I0315 07:21:03.042717   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.042728   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:03.042736   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:03.042795   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:03.082024   57277 cri.go:89] found id: ""
	I0315 07:21:03.082058   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.082068   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:03.082075   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:03.082139   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:03.122631   57277 cri.go:89] found id: ""
	I0315 07:21:03.122658   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.122666   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:03.122672   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:03.122722   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:03.160143   57277 cri.go:89] found id: ""
	I0315 07:21:03.160168   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.160179   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:03.160188   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:03.160250   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:03.196882   57277 cri.go:89] found id: ""
	I0315 07:21:03.196906   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.196917   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:03.196924   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:03.196984   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:03.237859   57277 cri.go:89] found id: ""
	I0315 07:21:03.237883   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.237890   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:03.237896   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:03.237943   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:03.276044   57277 cri.go:89] found id: ""
	I0315 07:21:03.276068   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.276077   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:03.276083   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:03.276129   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:03.318978   57277 cri.go:89] found id: ""
	I0315 07:21:03.319004   57277 logs.go:276] 0 containers: []
	W0315 07:21:03.319013   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:03.319026   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:03.319037   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:03.373052   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:03.373085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:03.387565   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:03.387597   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:03.471568   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:03.471588   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:03.471603   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:03.554617   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:03.554657   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.101350   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:06.116516   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:06.116596   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:06.157539   57277 cri.go:89] found id: ""
	I0315 07:21:06.157570   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.157582   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:06.157593   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:06.157659   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:06.197832   57277 cri.go:89] found id: ""
	I0315 07:21:06.197860   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.197870   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:06.197878   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:06.197938   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:06.255110   57277 cri.go:89] found id: ""
	I0315 07:21:06.255134   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.255141   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:06.255155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:06.255215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:06.294722   57277 cri.go:89] found id: ""
	I0315 07:21:06.294749   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.294760   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:06.294768   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:06.294829   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:06.334034   57277 cri.go:89] found id: ""
	I0315 07:21:06.334056   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.334063   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:06.334068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:06.334115   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:06.374217   57277 cri.go:89] found id: ""
	I0315 07:21:06.374256   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.374267   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:06.374275   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:06.374354   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:06.410441   57277 cri.go:89] found id: ""
	I0315 07:21:06.410467   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.410478   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:06.410485   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:06.410542   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:06.452315   57277 cri.go:89] found id: ""
	I0315 07:21:06.452339   57277 logs.go:276] 0 containers: []
	W0315 07:21:06.452347   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:06.452355   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:06.452370   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:06.465758   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:06.465786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:06.539892   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:06.539917   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:06.539937   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:06.625929   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:06.625964   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:06.671703   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:06.671739   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:04.532730   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:07.026933   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.097375   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.597321   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:06.346443   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:08.845949   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
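The interleaved pod_ready lines come from other minikube processes (pids 56654, 56818, 57679) whose metrics-server pods never report Ready over the same window. The equivalent manual check, using one of the pod names from the log and plain kubectl against the matching profile's kubeconfig, would be roughly:

	# Read the Ready condition of the metrics-server pod seen in the log.
	kubectl -n kube-system get pod metrics-server-57f55c9bc5-gwnxc \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# The pod's events usually explain why it is stuck.
	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-gwnxc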
	I0315 07:21:09.225650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:09.241414   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:09.241486   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:09.281262   57277 cri.go:89] found id: ""
	I0315 07:21:09.281289   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.281300   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:09.281308   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:09.281365   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:09.323769   57277 cri.go:89] found id: ""
	I0315 07:21:09.323796   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.323807   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:09.323814   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:09.323876   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:09.366002   57277 cri.go:89] found id: ""
	I0315 07:21:09.366031   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.366041   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:09.366049   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:09.366112   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.404544   57277 cri.go:89] found id: ""
	I0315 07:21:09.404569   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.404579   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:09.404586   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:09.404649   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:09.443559   57277 cri.go:89] found id: ""
	I0315 07:21:09.443586   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.443595   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:09.443603   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:09.443665   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:09.482250   57277 cri.go:89] found id: ""
	I0315 07:21:09.482276   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.482283   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:09.482289   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:09.482347   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:09.519378   57277 cri.go:89] found id: ""
	I0315 07:21:09.519405   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.519416   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:09.519423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:09.519475   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:09.563710   57277 cri.go:89] found id: ""
	I0315 07:21:09.563733   57277 logs.go:276] 0 containers: []
	W0315 07:21:09.563740   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:09.563748   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:09.563760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:09.578824   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:09.578851   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:09.668036   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:09.668069   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:09.668085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:09.749658   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:09.749702   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:09.794182   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:09.794208   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:12.349662   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:12.365671   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:12.365760   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:12.406879   57277 cri.go:89] found id: ""
	I0315 07:21:12.406911   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.406921   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:12.406929   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:12.406992   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:12.449773   57277 cri.go:89] found id: ""
	I0315 07:21:12.449801   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.449813   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:12.449821   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:12.449884   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:12.487528   57277 cri.go:89] found id: ""
	I0315 07:21:12.487550   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.487557   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:12.487563   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:12.487615   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:09.027909   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.526310   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:11.095784   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.596876   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:10.850575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:13.345644   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.347575   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:12.528140   57277 cri.go:89] found id: ""
	I0315 07:21:12.528170   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.528177   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:12.528187   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:12.528232   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:12.568113   57277 cri.go:89] found id: ""
	I0315 07:21:12.568135   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.568149   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:12.568155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:12.568260   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:12.609572   57277 cri.go:89] found id: ""
	I0315 07:21:12.609591   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.609598   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:12.609604   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:12.609663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:12.673620   57277 cri.go:89] found id: ""
	I0315 07:21:12.673647   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.673655   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:12.673662   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:12.673726   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:12.742096   57277 cri.go:89] found id: ""
	I0315 07:21:12.742116   57277 logs.go:276] 0 containers: []
	W0315 07:21:12.742124   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:12.742132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:12.742144   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:12.767451   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:12.767490   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:12.843250   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:12.843277   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:12.843291   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:12.923728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:12.923768   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:12.965503   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:12.965533   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.521670   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:15.537479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:15.537531   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:15.577487   57277 cri.go:89] found id: ""
	I0315 07:21:15.577513   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.577521   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:15.577527   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:15.577585   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:15.618397   57277 cri.go:89] found id: ""
	I0315 07:21:15.618423   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.618433   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:15.618439   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:15.618501   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:15.655733   57277 cri.go:89] found id: ""
	I0315 07:21:15.655756   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.655764   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:15.655770   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:15.655821   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:15.692756   57277 cri.go:89] found id: ""
	I0315 07:21:15.692784   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.692795   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:15.692804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:15.692865   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:15.740183   57277 cri.go:89] found id: ""
	I0315 07:21:15.740201   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.740209   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:15.740214   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:15.740282   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:15.778932   57277 cri.go:89] found id: ""
	I0315 07:21:15.778960   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.778971   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:15.778979   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:15.779042   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:15.818016   57277 cri.go:89] found id: ""
	I0315 07:21:15.818043   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.818060   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:15.818068   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:15.818133   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:15.857027   57277 cri.go:89] found id: ""
	I0315 07:21:15.857062   57277 logs.go:276] 0 containers: []
	W0315 07:21:15.857071   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:15.857082   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:15.857096   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:15.909785   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:15.909820   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:15.924390   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:15.924426   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:16.003227   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:16.003245   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:16.003256   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:16.083609   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:16.083655   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:13.526871   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.527209   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.527517   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:15.599554   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.599839   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:20.095518   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:17.845609   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:19.845963   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:18.629101   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:18.654417   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:18.654482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:18.717086   57277 cri.go:89] found id: ""
	I0315 07:21:18.717110   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.717120   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:18.717129   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:18.717190   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:18.762701   57277 cri.go:89] found id: ""
	I0315 07:21:18.762733   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.762744   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:18.762752   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:18.762812   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:18.800311   57277 cri.go:89] found id: ""
	I0315 07:21:18.800342   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.800353   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:18.800361   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:18.800421   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:18.839657   57277 cri.go:89] found id: ""
	I0315 07:21:18.839691   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.839701   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:18.839709   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:18.839759   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:18.880328   57277 cri.go:89] found id: ""
	I0315 07:21:18.880350   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.880357   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:18.880363   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:18.880415   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:18.919798   57277 cri.go:89] found id: ""
	I0315 07:21:18.919819   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.919826   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:18.919832   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:18.919882   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:18.957906   57277 cri.go:89] found id: ""
	I0315 07:21:18.957929   57277 logs.go:276] 0 containers: []
	W0315 07:21:18.957939   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:18.957945   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:18.957991   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:19.000240   57277 cri.go:89] found id: ""
	I0315 07:21:19.000267   57277 logs.go:276] 0 containers: []
	W0315 07:21:19.000276   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:19.000286   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:19.000302   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:19.083474   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:19.083507   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:19.127996   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:19.128028   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:19.181464   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:19.181494   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:19.195722   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:19.195751   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:19.270066   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:21.770778   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:21.785657   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:21.785717   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:21.823460   57277 cri.go:89] found id: ""
	I0315 07:21:21.823488   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.823498   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:21.823506   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:21.823553   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:21.863114   57277 cri.go:89] found id: ""
	I0315 07:21:21.863140   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.863147   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:21.863153   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:21.863201   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:21.900101   57277 cri.go:89] found id: ""
	I0315 07:21:21.900142   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.900152   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:21.900159   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:21.900216   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:21.941543   57277 cri.go:89] found id: ""
	I0315 07:21:21.941571   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.941583   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:21.941589   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:21.941653   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:21.976831   57277 cri.go:89] found id: ""
	I0315 07:21:21.976862   57277 logs.go:276] 0 containers: []
	W0315 07:21:21.976873   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:21.976881   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:21.976950   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:22.014104   57277 cri.go:89] found id: ""
	I0315 07:21:22.014136   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.014147   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:22.014155   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:22.014217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:22.051615   57277 cri.go:89] found id: ""
	I0315 07:21:22.051638   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.051647   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:22.051653   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:22.051705   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:22.093285   57277 cri.go:89] found id: ""
	I0315 07:21:22.093312   57277 logs.go:276] 0 containers: []
	W0315 07:21:22.093322   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:22.093333   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:22.093347   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:22.150193   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:22.150224   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:22.164296   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:22.164323   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:22.244749   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:22.244774   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:22.244788   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:22.332575   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:22.332610   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:20.026267   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.027057   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:22.097983   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.098158   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:21.846591   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.346427   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:24.878079   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:24.892501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:24.893032   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:24.934022   57277 cri.go:89] found id: ""
	I0315 07:21:24.934054   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.934065   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:24.934074   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:24.934146   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:24.971697   57277 cri.go:89] found id: ""
	I0315 07:21:24.971728   57277 logs.go:276] 0 containers: []
	W0315 07:21:24.971739   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:24.971746   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:24.971817   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:25.008439   57277 cri.go:89] found id: ""
	I0315 07:21:25.008471   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.008483   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:25.008492   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:25.008605   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:25.043974   57277 cri.go:89] found id: ""
	I0315 07:21:25.044000   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.044008   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:25.044013   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:25.044072   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:25.084027   57277 cri.go:89] found id: ""
	I0315 07:21:25.084059   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.084071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:25.084080   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:25.084143   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:25.121024   57277 cri.go:89] found id: ""
	I0315 07:21:25.121050   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.121058   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:25.121064   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:25.121121   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:25.156159   57277 cri.go:89] found id: ""
	I0315 07:21:25.156185   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.156193   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:25.156199   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:25.156266   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:25.194058   57277 cri.go:89] found id: ""
	I0315 07:21:25.194087   57277 logs.go:276] 0 containers: []
	W0315 07:21:25.194105   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:25.194116   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:25.194130   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:25.247659   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:25.247694   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:25.262893   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:25.262922   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:25.333535   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:25.333559   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:25.333574   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:25.415728   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:25.415767   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:24.027470   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.526084   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.596504   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.597100   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:26.845238   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:28.845333   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:27.962319   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:27.976978   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:27.977053   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:28.014838   57277 cri.go:89] found id: ""
	I0315 07:21:28.014869   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.014880   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:28.014889   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:28.014935   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:28.056697   57277 cri.go:89] found id: ""
	I0315 07:21:28.056727   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.056738   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:28.056744   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:28.056803   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:28.099162   57277 cri.go:89] found id: ""
	I0315 07:21:28.099185   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.099195   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:28.099202   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:28.099262   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:28.138846   57277 cri.go:89] found id: ""
	I0315 07:21:28.138871   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.138880   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:28.138887   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:28.138939   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:28.184532   57277 cri.go:89] found id: ""
	I0315 07:21:28.184556   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.184564   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:28.184570   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:28.184616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:28.220660   57277 cri.go:89] found id: ""
	I0315 07:21:28.220693   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.220704   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:28.220713   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:28.220778   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:28.258539   57277 cri.go:89] found id: ""
	I0315 07:21:28.258564   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.258574   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:28.258581   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:28.258643   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:28.297382   57277 cri.go:89] found id: ""
	I0315 07:21:28.297411   57277 logs.go:276] 0 containers: []
	W0315 07:21:28.297422   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:28.297434   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:28.297450   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:28.382230   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:28.382263   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:28.426274   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:28.426301   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.476612   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:28.476646   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:28.492455   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:28.492513   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:28.565876   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.066284   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:31.079538   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:31.079614   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:31.119314   57277 cri.go:89] found id: ""
	I0315 07:21:31.119336   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.119344   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:31.119349   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:31.119400   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:31.159847   57277 cri.go:89] found id: ""
	I0315 07:21:31.159878   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.159886   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:31.159893   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:31.159940   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:31.200716   57277 cri.go:89] found id: ""
	I0315 07:21:31.200743   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.200753   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:31.200759   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:31.200822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:31.236449   57277 cri.go:89] found id: ""
	I0315 07:21:31.236491   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.236503   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:31.236510   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:31.236566   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:31.274801   57277 cri.go:89] found id: ""
	I0315 07:21:31.274828   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.274839   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:31.274847   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:31.274910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:31.310693   57277 cri.go:89] found id: ""
	I0315 07:21:31.310742   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.310752   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:31.310760   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:31.310816   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:31.347919   57277 cri.go:89] found id: ""
	I0315 07:21:31.347945   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.347955   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:31.347962   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:31.348036   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:31.384579   57277 cri.go:89] found id: ""
	I0315 07:21:31.384616   57277 logs.go:276] 0 containers: []
	W0315 07:21:31.384631   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:31.384642   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:31.384661   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:31.398761   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:31.398786   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:31.470215   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:31.470241   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:31.470257   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:31.551467   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:31.551502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:31.595203   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:31.595240   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:28.527572   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.529404   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:31.096322   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:33.096501   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:30.845944   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:32.846905   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.347251   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:34.150578   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:34.164552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:34.164612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:34.203190   57277 cri.go:89] found id: ""
	I0315 07:21:34.203219   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.203231   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:34.203238   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:34.203327   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:34.241338   57277 cri.go:89] found id: ""
	I0315 07:21:34.241364   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.241372   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:34.241383   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:34.241431   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:34.283021   57277 cri.go:89] found id: ""
	I0315 07:21:34.283049   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.283061   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:34.283069   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:34.283131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:34.320944   57277 cri.go:89] found id: ""
	I0315 07:21:34.320972   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.320984   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:34.320992   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:34.321048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:34.360882   57277 cri.go:89] found id: ""
	I0315 07:21:34.360907   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.360919   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:34.360925   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:34.360985   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:34.398200   57277 cri.go:89] found id: ""
	I0315 07:21:34.398232   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.398244   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:34.398252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:34.398309   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:34.436200   57277 cri.go:89] found id: ""
	I0315 07:21:34.436229   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.436241   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:34.436251   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:34.436313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:34.477394   57277 cri.go:89] found id: ""
	I0315 07:21:34.477424   57277 logs.go:276] 0 containers: []
	W0315 07:21:34.477436   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:34.477453   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:34.477469   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:34.558332   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:34.558363   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:34.603416   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:34.603451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:34.655887   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:34.655921   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:34.671056   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:34.671080   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:34.743123   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.244102   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:37.257733   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:37.257790   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:37.293905   57277 cri.go:89] found id: ""
	I0315 07:21:37.293936   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.293946   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:37.293953   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:37.294013   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:37.331990   57277 cri.go:89] found id: ""
	I0315 07:21:37.332017   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.332027   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:37.332035   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:37.332097   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:37.370661   57277 cri.go:89] found id: ""
	I0315 07:21:37.370684   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.370691   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:37.370697   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:37.370745   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:37.407116   57277 cri.go:89] found id: ""
	I0315 07:21:37.407144   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.407154   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:37.407166   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:37.407226   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:37.445440   57277 cri.go:89] found id: ""
	I0315 07:21:37.445463   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.445471   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:37.445477   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:37.445535   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:37.485511   57277 cri.go:89] found id: ""
	I0315 07:21:37.485538   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.485545   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:37.485553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:37.485608   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:33.027039   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.526499   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:35.596887   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:38.095825   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.846148   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:40.346119   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:37.527277   57277 cri.go:89] found id: ""
	I0315 07:21:37.527306   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.527317   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:37.527326   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:37.527387   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:37.564511   57277 cri.go:89] found id: ""
	I0315 07:21:37.564544   57277 logs.go:276] 0 containers: []
	W0315 07:21:37.564555   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:37.564570   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:37.564585   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:37.610919   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:37.610954   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:37.668738   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:37.668777   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:37.684795   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:37.684839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:37.759109   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:37.759140   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:37.759155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.341222   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:40.357423   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:40.357504   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:40.398672   57277 cri.go:89] found id: ""
	I0315 07:21:40.398695   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.398703   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:40.398710   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:40.398757   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:40.437565   57277 cri.go:89] found id: ""
	I0315 07:21:40.437592   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.437604   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:40.437612   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:40.437678   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:40.477393   57277 cri.go:89] found id: ""
	I0315 07:21:40.477414   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.477422   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:40.477431   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:40.477490   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:40.519590   57277 cri.go:89] found id: ""
	I0315 07:21:40.519618   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.519626   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:40.519632   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:40.519694   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:40.561696   57277 cri.go:89] found id: ""
	I0315 07:21:40.561735   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.561747   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:40.561764   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:40.561834   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:40.601244   57277 cri.go:89] found id: ""
	I0315 07:21:40.601272   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.601281   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:40.601290   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:40.601350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:40.642369   57277 cri.go:89] found id: ""
	I0315 07:21:40.642396   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.642407   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:40.642415   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:40.642477   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:40.680761   57277 cri.go:89] found id: ""
	I0315 07:21:40.680801   57277 logs.go:276] 0 containers: []
	W0315 07:21:40.680813   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:40.680824   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:40.680839   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:40.741647   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:40.741682   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:40.757656   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:40.757685   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:40.835642   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:40.835667   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:40.835680   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:40.915181   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:40.915216   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:40.027208   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.027580   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.596334   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:45.097012   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:42.347169   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:44.845956   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:43.462192   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:43.475959   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:43.476030   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:43.511908   57277 cri.go:89] found id: ""
	I0315 07:21:43.511931   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.511938   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:43.511948   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:43.512014   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:43.547625   57277 cri.go:89] found id: ""
	I0315 07:21:43.547655   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.547666   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:43.547674   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:43.547739   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:43.586112   57277 cri.go:89] found id: ""
	I0315 07:21:43.586139   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.586148   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:43.586165   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:43.586229   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:43.621684   57277 cri.go:89] found id: ""
	I0315 07:21:43.621714   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.621722   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:43.621728   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:43.621781   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:43.658567   57277 cri.go:89] found id: ""
	I0315 07:21:43.658588   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.658598   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:43.658605   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:43.658664   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:43.697543   57277 cri.go:89] found id: ""
	I0315 07:21:43.697571   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.697582   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:43.697591   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:43.697654   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:43.739020   57277 cri.go:89] found id: ""
	I0315 07:21:43.739043   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.739050   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:43.739056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:43.739113   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:43.777061   57277 cri.go:89] found id: ""
	I0315 07:21:43.777086   57277 logs.go:276] 0 containers: []
	W0315 07:21:43.777097   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:43.777109   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:43.777124   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:43.827683   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:43.827717   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:43.843310   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:43.843343   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:43.932561   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:43.932583   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:43.932595   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:44.013336   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:44.013369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:46.559270   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:46.574804   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:46.574883   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:46.619556   57277 cri.go:89] found id: ""
	I0315 07:21:46.619591   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.619604   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:46.619610   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:46.619680   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:46.659418   57277 cri.go:89] found id: ""
	I0315 07:21:46.659446   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.659454   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:46.659461   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:46.659506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:46.694970   57277 cri.go:89] found id: ""
	I0315 07:21:46.694998   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.695007   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:46.695014   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:46.695067   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:46.746213   57277 cri.go:89] found id: ""
	I0315 07:21:46.746245   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.746257   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:46.746264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:46.746324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:46.804828   57277 cri.go:89] found id: ""
	I0315 07:21:46.804850   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.804857   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:46.804863   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:46.804925   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:46.841454   57277 cri.go:89] found id: ""
	I0315 07:21:46.841482   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.841493   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:46.841503   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:46.841573   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:46.879003   57277 cri.go:89] found id: ""
	I0315 07:21:46.879028   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.879035   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:46.879041   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:46.879099   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:46.916183   57277 cri.go:89] found id: ""
	I0315 07:21:46.916205   57277 logs.go:276] 0 containers: []
	W0315 07:21:46.916213   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:46.916222   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:46.916232   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:47.001798   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:47.001833   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:47.043043   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:47.043076   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:47.095646   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:47.095700   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:47.110664   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:47.110701   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:47.183132   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:44.527172   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.528091   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:47.596841   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:50.095860   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:46.846773   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:48.848890   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:49.684084   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:49.699142   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:49.699223   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:49.738025   57277 cri.go:89] found id: ""
	I0315 07:21:49.738058   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.738076   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:49.738088   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:49.738148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:49.779065   57277 cri.go:89] found id: ""
	I0315 07:21:49.779087   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.779095   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:49.779100   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:49.779150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:49.819154   57277 cri.go:89] found id: ""
	I0315 07:21:49.819185   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.819196   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:49.819204   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:49.819271   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:49.857585   57277 cri.go:89] found id: ""
	I0315 07:21:49.857610   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.857619   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:49.857625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:49.857671   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:49.895434   57277 cri.go:89] found id: ""
	I0315 07:21:49.895459   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.895469   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:49.895475   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:49.895526   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:49.935507   57277 cri.go:89] found id: ""
	I0315 07:21:49.935535   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.935542   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:49.935548   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:49.935616   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:49.980268   57277 cri.go:89] found id: ""
	I0315 07:21:49.980299   57277 logs.go:276] 0 containers: []
	W0315 07:21:49.980310   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:49.980317   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:49.980380   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:50.018763   57277 cri.go:89] found id: ""
	I0315 07:21:50.018792   57277 logs.go:276] 0 containers: []
	W0315 07:21:50.018803   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:50.018814   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:50.018828   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:50.060903   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:50.060929   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:50.111295   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:50.111325   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:50.125154   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:50.125181   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:50.207928   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:50.207956   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:50.207971   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:49.026360   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.026731   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.096398   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:54.096753   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:51.348247   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:53.847028   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:52.794425   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:52.809795   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:52.809858   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:52.848985   57277 cri.go:89] found id: ""
	I0315 07:21:52.849012   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.849022   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:52.849030   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:52.849084   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:52.888409   57277 cri.go:89] found id: ""
	I0315 07:21:52.888434   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.888442   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:52.888448   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:52.888509   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:52.927948   57277 cri.go:89] found id: ""
	I0315 07:21:52.927969   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.927976   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:52.927982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:52.928034   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:52.965080   57277 cri.go:89] found id: ""
	I0315 07:21:52.965110   57277 logs.go:276] 0 containers: []
	W0315 07:21:52.965121   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:52.965129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:52.965183   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:53.004737   57277 cri.go:89] found id: ""
	I0315 07:21:53.004759   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.004767   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:53.004773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:53.004822   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:53.043546   57277 cri.go:89] found id: ""
	I0315 07:21:53.043580   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.043591   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:53.043599   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:53.043656   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:53.079299   57277 cri.go:89] found id: ""
	I0315 07:21:53.079325   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.079333   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:53.079339   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:53.079397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:53.116506   57277 cri.go:89] found id: ""
	I0315 07:21:53.116531   57277 logs.go:276] 0 containers: []
	W0315 07:21:53.116539   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:53.116547   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:53.116564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.159822   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:53.159846   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:53.214637   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:53.214671   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:53.231870   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:53.231902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:53.310706   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:53.310724   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:53.310738   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:55.898081   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:55.913252   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:55.913324   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:55.956719   57277 cri.go:89] found id: ""
	I0315 07:21:55.956741   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.956750   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:55.956757   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:55.956819   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:55.993862   57277 cri.go:89] found id: ""
	I0315 07:21:55.993891   57277 logs.go:276] 0 containers: []
	W0315 07:21:55.993903   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:55.993911   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:55.993970   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:56.030017   57277 cri.go:89] found id: ""
	I0315 07:21:56.030040   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.030051   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:56.030059   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:56.030118   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:56.065488   57277 cri.go:89] found id: ""
	I0315 07:21:56.065517   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.065527   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:56.065535   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:56.065593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:56.106633   57277 cri.go:89] found id: ""
	I0315 07:21:56.106657   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.106667   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:56.106674   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:56.106732   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:56.142951   57277 cri.go:89] found id: ""
	I0315 07:21:56.142976   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.142984   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:56.142990   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:56.143049   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:56.179418   57277 cri.go:89] found id: ""
	I0315 07:21:56.179458   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.179470   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:56.179479   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:56.179545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:56.217220   57277 cri.go:89] found id: ""
	I0315 07:21:56.217249   57277 logs.go:276] 0 containers: []
	W0315 07:21:56.217260   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:56.217271   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:56.217286   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:56.267852   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:56.267885   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:56.281678   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:56.281708   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:56.359462   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:21:56.359486   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:56.359501   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:56.440119   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:56.440157   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:53.026758   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:55.027300   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:57.527408   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.097890   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.595295   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:56.348176   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.847276   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:21:58.984494   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:21:59.000115   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:21:59.000185   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:21:59.042069   57277 cri.go:89] found id: ""
	I0315 07:21:59.042091   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.042099   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:21:59.042105   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:21:59.042150   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:21:59.080764   57277 cri.go:89] found id: ""
	I0315 07:21:59.080787   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.080795   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:21:59.080800   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:21:59.080852   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:21:59.119130   57277 cri.go:89] found id: ""
	I0315 07:21:59.119153   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.119162   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:21:59.119167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:21:59.119217   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:21:59.173956   57277 cri.go:89] found id: ""
	I0315 07:21:59.173986   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.173994   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:21:59.174000   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:21:59.174058   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:21:59.239554   57277 cri.go:89] found id: ""
	I0315 07:21:59.239582   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.239593   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:21:59.239600   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:21:59.239658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:21:59.290345   57277 cri.go:89] found id: ""
	I0315 07:21:59.290370   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.290376   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:21:59.290382   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:21:59.290438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:21:59.330088   57277 cri.go:89] found id: ""
	I0315 07:21:59.330115   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.330123   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:21:59.330129   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:21:59.330181   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:21:59.371200   57277 cri.go:89] found id: ""
	I0315 07:21:59.371224   57277 logs.go:276] 0 containers: []
	W0315 07:21:59.371232   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:21:59.371240   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:21:59.371252   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:21:59.451948   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:21:59.452004   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:21:59.495934   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:21:59.495963   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:21:59.551528   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:21:59.551564   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:21:59.567357   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:21:59.567387   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:21:59.647583   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.148157   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:02.162432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:02.162495   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:02.199616   57277 cri.go:89] found id: ""
	I0315 07:22:02.199644   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.199653   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:02.199659   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:02.199706   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:02.237693   57277 cri.go:89] found id: ""
	I0315 07:22:02.237721   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.237732   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:02.237742   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:02.237806   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:02.279770   57277 cri.go:89] found id: ""
	I0315 07:22:02.279799   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.279810   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:02.279819   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:02.279880   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:02.318293   57277 cri.go:89] found id: ""
	I0315 07:22:02.318317   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.318325   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:02.318331   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:02.318377   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:02.357387   57277 cri.go:89] found id: ""
	I0315 07:22:02.357410   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.357420   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:02.357427   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:02.357487   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:02.395339   57277 cri.go:89] found id: ""
	I0315 07:22:02.395373   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.395386   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:02.395394   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:02.395452   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:02.431327   57277 cri.go:89] found id: ""
	I0315 07:22:02.431350   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.431357   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:02.431362   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:02.431409   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:02.471701   57277 cri.go:89] found id: ""
	I0315 07:22:02.471728   57277 logs.go:276] 0 containers: []
	W0315 07:22:02.471739   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:02.471751   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:02.471766   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:00.026883   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.527996   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:00.596242   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.598494   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:05.096070   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:01.347519   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:03.847213   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:02.532839   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:02.532866   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:02.547497   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:02.547524   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:02.623179   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:02.623202   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:02.623214   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:02.698932   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:02.698968   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.245638   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:05.259583   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:05.259658   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:05.298507   57277 cri.go:89] found id: ""
	I0315 07:22:05.298532   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.298540   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:05.298545   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:05.298595   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:05.338665   57277 cri.go:89] found id: ""
	I0315 07:22:05.338697   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.338707   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:05.338714   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:05.338776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:05.377573   57277 cri.go:89] found id: ""
	I0315 07:22:05.377606   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.377618   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:05.377625   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:05.377698   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:05.417545   57277 cri.go:89] found id: ""
	I0315 07:22:05.417605   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.417627   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:05.417635   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:05.417702   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:05.459828   57277 cri.go:89] found id: ""
	I0315 07:22:05.459858   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.459869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:05.459878   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:05.459946   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:05.500108   57277 cri.go:89] found id: ""
	I0315 07:22:05.500137   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.500146   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:05.500152   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:05.500215   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:05.540606   57277 cri.go:89] found id: ""
	I0315 07:22:05.540634   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.540640   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:05.540651   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:05.540697   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:05.582958   57277 cri.go:89] found id: ""
	I0315 07:22:05.582987   57277 logs.go:276] 0 containers: []
	W0315 07:22:05.582999   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:05.583009   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:05.583023   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:05.639008   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:05.639044   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:05.653964   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:05.654000   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:05.739647   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:05.739680   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:05.739697   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:05.818634   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:05.818668   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:05.026684   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.527407   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:07.596614   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:09.597201   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:06.345982   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.847283   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:08.363650   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:08.380366   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:08.380438   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:08.424383   57277 cri.go:89] found id: ""
	I0315 07:22:08.424408   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.424416   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:08.424422   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:08.424498   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:08.470604   57277 cri.go:89] found id: ""
	I0315 07:22:08.470631   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.470639   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:08.470645   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:08.470693   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:08.513510   57277 cri.go:89] found id: ""
	I0315 07:22:08.513554   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.513566   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:08.513574   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:08.513663   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:08.552802   57277 cri.go:89] found id: ""
	I0315 07:22:08.552833   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.552843   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:08.552851   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:08.552904   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:08.591504   57277 cri.go:89] found id: ""
	I0315 07:22:08.591534   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.591545   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:08.591552   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:08.591612   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:08.631975   57277 cri.go:89] found id: ""
	I0315 07:22:08.632002   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.632010   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:08.632016   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:08.632061   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:08.673204   57277 cri.go:89] found id: ""
	I0315 07:22:08.673230   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.673238   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:08.673244   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:08.673305   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:08.717623   57277 cri.go:89] found id: ""
	I0315 07:22:08.717650   57277 logs.go:276] 0 containers: []
	W0315 07:22:08.717662   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:08.717673   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:08.717690   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:08.757581   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:08.757615   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:08.812050   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:08.812086   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:08.826932   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:08.826959   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:08.905953   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:08.905977   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:08.905992   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:11.486907   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:11.503056   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:11.503131   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:11.544796   57277 cri.go:89] found id: ""
	I0315 07:22:11.544824   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.544834   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:11.544841   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:11.544907   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:11.588121   57277 cri.go:89] found id: ""
	I0315 07:22:11.588158   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.588172   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:11.588180   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:11.588237   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:11.629661   57277 cri.go:89] found id: ""
	I0315 07:22:11.629689   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.629698   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:11.629705   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:11.629764   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:11.673492   57277 cri.go:89] found id: ""
	I0315 07:22:11.673532   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.673547   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:11.673553   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:11.673619   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:11.716644   57277 cri.go:89] found id: ""
	I0315 07:22:11.716679   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.716690   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:11.716698   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:11.716765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:11.756291   57277 cri.go:89] found id: ""
	I0315 07:22:11.756320   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.756330   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:11.756337   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:11.756397   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:11.797702   57277 cri.go:89] found id: ""
	I0315 07:22:11.797729   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.797738   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:11.797743   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:11.797808   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:11.836269   57277 cri.go:89] found id: ""
	I0315 07:22:11.836292   57277 logs.go:276] 0 containers: []
	W0315 07:22:11.836300   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:11.836308   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:11.836320   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:11.888848   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:11.888881   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:11.902971   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:11.902996   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:11.973325   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:11.973345   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:11.973358   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:12.053726   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:12.053760   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:09.527437   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:12.028076   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.597381   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.097373   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:11.347638   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:13.845693   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:14.601515   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:14.616112   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:14.616178   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:14.656681   57277 cri.go:89] found id: ""
	I0315 07:22:14.656710   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.656718   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:14.656724   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:14.656777   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:14.698172   57277 cri.go:89] found id: ""
	I0315 07:22:14.698206   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.698218   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:14.698226   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:14.698281   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:14.739747   57277 cri.go:89] found id: ""
	I0315 07:22:14.739775   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.739786   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:14.739798   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:14.739868   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:14.778225   57277 cri.go:89] found id: ""
	I0315 07:22:14.778251   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.778258   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:14.778264   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:14.778313   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:14.816817   57277 cri.go:89] found id: ""
	I0315 07:22:14.816845   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.816853   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:14.816859   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:14.816909   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:14.856205   57277 cri.go:89] found id: ""
	I0315 07:22:14.856232   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.856243   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:14.856250   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:14.856307   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:14.896677   57277 cri.go:89] found id: ""
	I0315 07:22:14.896705   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.896715   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:14.896721   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:14.896779   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:14.934433   57277 cri.go:89] found id: ""
	I0315 07:22:14.934464   57277 logs.go:276] 0 containers: []
	W0315 07:22:14.934475   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:14.934487   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:14.934502   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:15.016499   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:15.016539   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:15.062780   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:15.062873   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:15.119599   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:15.119633   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:15.136241   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:15.136282   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:15.213521   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:14.527163   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.529323   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:16.596170   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:18.597030   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:15.845922   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.847723   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.347150   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:17.714637   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:17.728970   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:17.729033   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:17.768454   57277 cri.go:89] found id: ""
	I0315 07:22:17.768505   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.768513   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:17.768519   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:17.768598   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:17.809282   57277 cri.go:89] found id: ""
	I0315 07:22:17.809316   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.809329   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:17.809338   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:17.809401   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:17.850503   57277 cri.go:89] found id: ""
	I0315 07:22:17.850527   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.850534   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:17.850540   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:17.850593   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:17.889376   57277 cri.go:89] found id: ""
	I0315 07:22:17.889419   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.889426   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:17.889432   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:17.889491   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:17.926935   57277 cri.go:89] found id: ""
	I0315 07:22:17.926965   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.926975   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:17.926982   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:17.927048   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:17.968504   57277 cri.go:89] found id: ""
	I0315 07:22:17.968534   57277 logs.go:276] 0 containers: []
	W0315 07:22:17.968550   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:17.968558   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:17.968617   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:18.009730   57277 cri.go:89] found id: ""
	I0315 07:22:18.009756   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.009766   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:18.009773   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:18.009835   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:18.048882   57277 cri.go:89] found id: ""
	I0315 07:22:18.048910   57277 logs.go:276] 0 containers: []
	W0315 07:22:18.048918   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:18.048928   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:18.048939   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:18.104438   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:18.104495   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:18.120376   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:18.120405   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:18.195170   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:18.195190   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:18.195206   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:18.271415   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:18.271451   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
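	(The cycle ending above is one pass of the diagnostics loop that repeats throughout this log: every crictl query for a control-plane container returns an empty list, and the "describe nodes" step fails because nothing is answering on localhost:8443. A minimal sketch of that pass, using only the commands already shown in the log and run inside the guest, with the kubectl path and kubeconfig minikube installs:)

# Gather kubelet, dmesg, CRI-O and container-status output, then try to
# describe the nodes; the last command is the one failing here with
# "connection refused" while the apiserver is down.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo journalctl -u crio -n 400
sudo crictl ps -a || sudo docker ps -a
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig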
	I0315 07:22:20.817082   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:20.831393   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:20.831473   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:20.868802   57277 cri.go:89] found id: ""
	I0315 07:22:20.868827   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.868839   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:20.868846   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:20.868910   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:20.906410   57277 cri.go:89] found id: ""
	I0315 07:22:20.906438   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.906448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:20.906458   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:20.906523   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:20.945335   57277 cri.go:89] found id: ""
	I0315 07:22:20.945369   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.945380   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:20.945387   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:20.945449   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:20.983281   57277 cri.go:89] found id: ""
	I0315 07:22:20.983307   57277 logs.go:276] 0 containers: []
	W0315 07:22:20.983318   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:20.983330   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:20.983382   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:21.021041   57277 cri.go:89] found id: ""
	I0315 07:22:21.021064   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.021071   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:21.021077   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:21.021120   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:21.062759   57277 cri.go:89] found id: ""
	I0315 07:22:21.062780   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.062787   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:21.062793   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:21.062837   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:21.101700   57277 cri.go:89] found id: ""
	I0315 07:22:21.101722   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.101729   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:21.101734   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:21.101785   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:21.141919   57277 cri.go:89] found id: ""
	I0315 07:22:21.141952   57277 logs.go:276] 0 containers: []
	W0315 07:22:21.141963   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:21.141974   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:21.141989   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:21.217699   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:21.217735   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:21.262033   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:21.262075   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:21.317132   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:21.317168   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:21.332802   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:21.332830   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:21.412412   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:19.027373   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:21.027538   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:20.597329   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.097833   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:22.846403   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.345226   56818 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:23.912560   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:23.928004   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:23.928077   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:23.967039   57277 cri.go:89] found id: ""
	I0315 07:22:23.967067   57277 logs.go:276] 0 containers: []
	W0315 07:22:23.967079   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:23.967087   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:23.967148   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:24.006831   57277 cri.go:89] found id: ""
	I0315 07:22:24.006865   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.006873   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:24.006882   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:24.006941   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:24.046439   57277 cri.go:89] found id: ""
	I0315 07:22:24.046462   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.046470   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:24.046476   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:24.046522   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:24.090882   57277 cri.go:89] found id: ""
	I0315 07:22:24.090908   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.090918   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:24.090926   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:24.090989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:24.131063   57277 cri.go:89] found id: ""
	I0315 07:22:24.131087   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.131096   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:24.131102   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:24.131177   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:24.172098   57277 cri.go:89] found id: ""
	I0315 07:22:24.172124   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.172136   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:24.172143   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:24.172227   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:24.211170   57277 cri.go:89] found id: ""
	I0315 07:22:24.211197   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.211208   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:24.211216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:24.211273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:24.252312   57277 cri.go:89] found id: ""
	I0315 07:22:24.252342   57277 logs.go:276] 0 containers: []
	W0315 07:22:24.252353   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:24.252365   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:24.252385   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:24.295927   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:24.295958   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:24.352427   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:24.352481   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:24.368843   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:24.368874   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:24.449415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:24.449438   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:24.449453   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.035243   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:27.050559   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:27.050641   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:27.092228   57277 cri.go:89] found id: ""
	I0315 07:22:27.092258   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.092268   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:27.092276   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:27.092339   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:27.134954   57277 cri.go:89] found id: ""
	I0315 07:22:27.134986   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.134998   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:27.135007   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:27.135066   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:27.174887   57277 cri.go:89] found id: ""
	I0315 07:22:27.174916   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.174927   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:27.174935   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:27.175006   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:27.219163   57277 cri.go:89] found id: ""
	I0315 07:22:27.219201   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.219217   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:27.219225   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:27.219304   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:27.261259   57277 cri.go:89] found id: ""
	I0315 07:22:27.261283   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.261294   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:27.261301   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:27.261375   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:27.305668   57277 cri.go:89] found id: ""
	I0315 07:22:27.305696   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.305707   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:27.305715   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:27.305780   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:27.344118   57277 cri.go:89] found id: ""
	I0315 07:22:27.344148   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.344159   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:27.344167   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:27.344225   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:27.389344   57277 cri.go:89] found id: ""
	I0315 07:22:27.389374   57277 logs.go:276] 0 containers: []
	W0315 07:22:27.389384   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:27.389396   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:27.389413   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:27.446803   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:27.446843   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:27.464144   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:27.464178   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:22:23.527221   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.025643   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:25.597795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:27.598010   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:26.339498   56818 pod_ready.go:81] duration metric: took 4m0.000865977s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" ...
	E0315 07:22:26.339535   56818 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bhbwz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:22:26.339552   56818 pod_ready.go:38] duration metric: took 4m10.073427337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:22:26.339590   56818 kubeadm.go:591] duration metric: took 4m17.408460294s to restartPrimaryControlPlane
	W0315 07:22:26.339647   56818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
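	(The lines above are the decision point for this node: the metrics-server pod never reported Ready within the 4m0s budget, so minikube gives up on restarting the existing control plane and falls back to a full kubeadm reset and re-init. The same condition can be checked by hand with something like the sketch below; this is illustrative only, the k8s-app=metrics-server label and the jsonpath are assumptions, and minikube polls the condition through its own client rather than kubectl:)

# Print the Ready condition of the metrics-server pod in kube-system.
kubectl -n kube-system get pod -l k8s-app=metrics-server \
    -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'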
	I0315 07:22:26.339673   56818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0315 07:22:27.570849   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:27.570879   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:27.570896   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:27.650364   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:27.650401   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:30.200963   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:30.220584   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:30.220662   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:30.273246   57277 cri.go:89] found id: ""
	I0315 07:22:30.273273   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.273283   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:30.273291   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:30.273350   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:30.316347   57277 cri.go:89] found id: ""
	I0315 07:22:30.316427   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.316452   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:30.316481   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:30.316545   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:30.359348   57277 cri.go:89] found id: ""
	I0315 07:22:30.359378   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.359390   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:30.359397   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:30.359482   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:30.403049   57277 cri.go:89] found id: ""
	I0315 07:22:30.403086   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.403099   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:30.403128   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:30.403228   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:30.441833   57277 cri.go:89] found id: ""
	I0315 07:22:30.441860   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.441869   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:30.441877   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:30.441942   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:30.488175   57277 cri.go:89] found id: ""
	I0315 07:22:30.488202   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.488210   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:30.488216   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:30.488273   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:30.535668   57277 cri.go:89] found id: ""
	I0315 07:22:30.535693   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.535700   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:30.535707   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:30.535765   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:30.579389   57277 cri.go:89] found id: ""
	I0315 07:22:30.579419   57277 logs.go:276] 0 containers: []
	W0315 07:22:30.579429   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:30.579441   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:30.579456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:30.639868   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:30.639902   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:30.661328   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:30.661369   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:30.751415   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:30.751440   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:30.751456   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:30.857293   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:30.857328   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:28.026233   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:30.027353   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.027867   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:32.596795   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:35.096005   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:33.404090   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:33.419277   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:22:33.419345   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:22:33.457665   57277 cri.go:89] found id: ""
	I0315 07:22:33.457687   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.457695   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:22:33.457701   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:22:33.457746   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:22:33.500415   57277 cri.go:89] found id: ""
	I0315 07:22:33.500439   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.500448   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:22:33.500454   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:22:33.500537   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:22:33.539428   57277 cri.go:89] found id: ""
	I0315 07:22:33.539456   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.539481   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:22:33.539488   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:22:33.539549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:22:33.582368   57277 cri.go:89] found id: ""
	I0315 07:22:33.582398   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.582410   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:22:33.582418   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:22:33.582479   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:22:33.623742   57277 cri.go:89] found id: ""
	I0315 07:22:33.623771   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.623782   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:22:33.623790   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:22:33.623849   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:22:33.664970   57277 cri.go:89] found id: ""
	I0315 07:22:33.665001   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.665012   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:22:33.665020   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:22:33.665078   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:22:33.706451   57277 cri.go:89] found id: ""
	I0315 07:22:33.706483   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.706493   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:22:33.706502   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:22:33.706560   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:22:33.744807   57277 cri.go:89] found id: ""
	I0315 07:22:33.744831   57277 logs.go:276] 0 containers: []
	W0315 07:22:33.744838   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:22:33.744845   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:22:33.744858   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:22:33.797559   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:22:33.797594   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:22:33.814118   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:22:33.814155   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:22:33.896587   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0315 07:22:33.896621   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:22:33.896634   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:22:33.987757   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:22:33.987795   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:22:36.537607   57277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:22:36.552332   57277 kubeadm.go:591] duration metric: took 4m2.711110116s to restartPrimaryControlPlane
	W0315 07:22:36.552409   57277 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:22:36.552430   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:22:34.028348   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:36.527498   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:37.596261   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.597564   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:39.058429   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.505978486s)
	I0315 07:22:39.058505   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:39.074050   57277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:39.085522   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:39.097226   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:39.097247   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:39.097302   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:22:39.107404   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:39.107463   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:39.118346   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:22:39.130439   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:39.130504   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:39.143223   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.154709   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:39.154761   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:39.166173   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:22:39.177265   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:39.177329   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
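	(The grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so that kubeadm init can regenerate it. A minimal sketch of that check, as an illustration rather than minikube's actual code, using the endpoint shown in this log:)

endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # Keep the kubeconfig only if it points at the expected endpoint;
    # in this run none of the files exist, so the rm is effectively a no-op.
    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
    fi
done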
	I0315 07:22:39.188646   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:39.426757   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:22:39.026604   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:41.026814   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:42.100622   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:44.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:43.526575   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:45.527611   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.527964   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:47.097312   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:49.600150   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:50.027162   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.527342   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:52.095985   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:54.096692   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:55.026644   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:57.026768   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.864240   56818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.524546701s)
	I0315 07:22:58.864316   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:22:58.881918   56818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:22:58.894334   56818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:22:58.906597   56818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:22:58.906621   56818 kubeadm.go:156] found existing configuration files:
	
	I0315 07:22:58.906664   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0315 07:22:58.919069   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:22:58.919138   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:22:58.931501   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0315 07:22:58.943572   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:22:58.943625   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:22:58.955871   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.966059   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:22:58.966130   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:22:58.976452   56818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0315 07:22:58.986351   56818 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:22:58.986401   56818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:22:58.996629   56818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:22:59.056211   56818 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:22:59.056278   56818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:22:59.221368   56818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:22:59.221526   56818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:22:59.221667   56818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:22:59.456334   56818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:22:59.459054   56818 out.go:204]   - Generating certificates and keys ...
	I0315 07:22:59.459166   56818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:22:59.459263   56818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:22:59.459337   56818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:22:59.459418   56818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:22:59.459491   56818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:22:59.459547   56818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:22:59.459652   56818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:22:59.460321   56818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:22:59.460848   56818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:22:59.461344   56818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:22:59.461686   56818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:22:59.461773   56818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:22:59.622989   56818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:22:59.735032   56818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:22:59.783386   56818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:23:00.050901   56818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:23:00.051589   56818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:23:00.056639   56818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:22:56.097513   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:22:58.596689   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:00.058517   56818 out.go:204]   - Booting up control plane ...
	I0315 07:23:00.058624   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:23:00.058695   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:23:00.058757   56818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:23:00.078658   56818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:23:00.079134   56818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:23:00.079199   56818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:23:00.221762   56818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:22:59.527111   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.528557   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:01.095544   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:03.096837   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.230611   56818 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.008044 seconds
	I0315 07:23:06.230759   56818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:23:06.251874   56818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:23:06.794394   56818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:23:06.794663   56818 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-128870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:23:07.310603   56818 kubeadm.go:309] [bootstrap-token] Using token: 2udvno.3q8e4ar4wutd2228
	I0315 07:23:07.312493   56818 out.go:204]   - Configuring RBAC rules ...
	I0315 07:23:07.312631   56818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:23:07.327365   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:23:07.336397   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:23:07.344585   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:23:07.349398   56818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:23:07.353302   56818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:23:07.371837   56818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:23:07.636048   56818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:23:07.735706   56818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:23:07.736830   56818 kubeadm.go:309] 
	I0315 07:23:07.736932   56818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:23:07.736951   56818 kubeadm.go:309] 
	I0315 07:23:07.737008   56818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:23:07.737031   56818 kubeadm.go:309] 
	I0315 07:23:07.737084   56818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:23:07.737144   56818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:23:07.737256   56818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:23:07.737273   56818 kubeadm.go:309] 
	I0315 07:23:07.737322   56818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:23:07.737347   56818 kubeadm.go:309] 
	I0315 07:23:07.737415   56818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:23:07.737426   56818 kubeadm.go:309] 
	I0315 07:23:07.737505   56818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:23:07.737609   56818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:23:07.737704   56818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:23:07.737719   56818 kubeadm.go:309] 
	I0315 07:23:07.737813   56818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:23:07.737928   56818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:23:07.737946   56818 kubeadm.go:309] 
	I0315 07:23:07.738066   56818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738201   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:23:07.738232   56818 kubeadm.go:309] 	--control-plane 
	I0315 07:23:07.738237   56818 kubeadm.go:309] 
	I0315 07:23:07.738370   56818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:23:07.738382   56818 kubeadm.go:309] 
	I0315 07:23:07.738498   56818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 2udvno.3q8e4ar4wutd2228 \
	I0315 07:23:07.738648   56818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:23:07.739063   56818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:23:07.739090   56818 cni.go:84] Creating CNI manager for ""
	I0315 07:23:07.739099   56818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:23:07.741268   56818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:23:04.027926   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:06.526265   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:05.597740   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:08.097253   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:07.742608   56818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:23:07.781187   56818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
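	(Before starting pods, minikube writes a generated bridge CNI chain to /etc/cni/net.d/1-k8s.conflist, as shown above. The exact 457-byte file is not reproduced in this log; the sketch below writes only a representative bridge + portmap conflist of that kind, with illustrative field values that are not minikube's generated file:)

# Representative only: the real 1-k8s.conflist contents are not in this log.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF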
	I0315 07:23:07.810957   56818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:23:07.811048   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:07.811086   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-128870 minikube.k8s.io/updated_at=2024_03_15T07_23_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=default-k8s-diff-port-128870 minikube.k8s.io/primary=true
	I0315 07:23:08.168436   56818 ops.go:34] apiserver oom_adj: -16
	I0315 07:23:08.168584   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:08.669432   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.169106   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.668654   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.169657   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:10.669592   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:09.028586   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.527616   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:10.598053   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:13.096254   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:15.098002   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:11.169138   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:11.669379   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.169522   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:12.668865   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.168709   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.668674   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.168940   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:14.669371   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.169203   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:15.668688   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:13.528394   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.027157   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:16.169447   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:16.669360   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.169364   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:17.669322   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.168628   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:18.668633   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.168616   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:19.669572   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.168625   56818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:23:20.277497   56818 kubeadm.go:1107] duration metric: took 12.466506945s to wait for elevateKubeSystemPrivileges
	W0315 07:23:20.277538   56818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:23:20.277548   56818 kubeadm.go:393] duration metric: took 5m11.398710975s to StartCluster
	I0315 07:23:20.277568   56818 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.277656   56818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:23:20.279942   56818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:23:20.280232   56818 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:23:20.282386   56818 out.go:177] * Verifying Kubernetes components...
	I0315 07:23:20.280274   56818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:23:20.280438   56818 config.go:182] Loaded profile config "default-k8s-diff-port-128870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:23:20.283833   56818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:23:20.283846   56818 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283857   56818 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283889   56818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-128870"
	I0315 07:23:20.283892   56818 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283909   56818 addons.go:243] addon metrics-server should already be in state true
	I0315 07:23:20.283836   56818 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-128870"
	I0315 07:23:20.283951   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.283952   56818 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.283986   56818 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:23:20.284050   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.284312   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284340   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284339   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284360   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.284377   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.284399   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.302835   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0315 07:23:20.303324   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.303918   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.303939   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.304305   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.304499   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.304519   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0315 07:23:20.304565   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0315 07:23:20.304865   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305022   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.305421   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305445   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305534   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.305558   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.305831   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306394   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.306511   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.306542   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308237   56818 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-128870"
	W0315 07:23:20.308258   56818 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:23:20.308286   56818 host.go:66] Checking if "default-k8s-diff-port-128870" exists ...
	I0315 07:23:20.308651   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308677   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.308956   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.308983   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.323456   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0315 07:23:20.323627   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0315 07:23:20.324448   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324586   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.324957   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.324978   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325121   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.325135   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.325392   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325446   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.325544   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.325859   56818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:23:20.325885   56818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:23:20.326048   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0315 07:23:20.326655   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.327296   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.327429   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.327439   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.329949   56818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:23:20.327771   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.331288   56818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.331307   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:23:20.331309   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.331329   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.333066   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.334543   56818 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:23:17.098139   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:19.596445   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.334567   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335844   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:23:20.335851   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.335857   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:23:20.335876   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.335874   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.335167   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.336121   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.336292   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.336500   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.338503   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339046   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.339074   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.339209   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.339344   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.339484   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.339597   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.350087   56818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0315 07:23:20.350535   56818 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:23:20.350958   56818 main.go:141] libmachine: Using API Version  1
	I0315 07:23:20.350972   56818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:23:20.351309   56818 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:23:20.351523   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetState
	I0315 07:23:20.353272   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .DriverName
	I0315 07:23:20.353519   56818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.353536   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:23:20.353553   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHHostname
	I0315 07:23:20.356173   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356614   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:8d:7d", ip: ""} in network mk-default-k8s-diff-port-128870: {Iface:virbr2 ExpiryTime:2024-03-15 08:17:54 +0000 UTC Type:0 Mac:52:54:00:df:8d:7d Iaid: IPaddr:192.168.50.123 Prefix:24 Hostname:default-k8s-diff-port-128870 Clientid:01:52:54:00:df:8d:7d}
	I0315 07:23:20.356645   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | domain default-k8s-diff-port-128870 has defined IP address 192.168.50.123 and MAC address 52:54:00:df:8d:7d in network mk-default-k8s-diff-port-128870
	I0315 07:23:20.356860   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHPort
	I0315 07:23:20.357049   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHKeyPath
	I0315 07:23:20.357180   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .GetSSHUsername
	I0315 07:23:20.357293   56818 sshutil.go:53] new ssh client: &{IP:192.168.50.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/default-k8s-diff-port-128870/id_rsa Username:docker}
	I0315 07:23:20.485360   56818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:23:20.506356   56818 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516096   56818 node_ready.go:49] node "default-k8s-diff-port-128870" has status "Ready":"True"
	I0315 07:23:20.516116   56818 node_ready.go:38] duration metric: took 9.728555ms for node "default-k8s-diff-port-128870" to be "Ready" ...
	I0315 07:23:20.516125   56818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:20.522244   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:20.598743   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:23:20.635350   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:23:20.664265   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:23:20.664284   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:23:20.719474   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:23:20.719497   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:23:20.807316   56818 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:20.807341   56818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:23:20.831891   56818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:23:22.562662   56818 pod_ready.go:102] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.596973   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.998174664s)
	I0315 07:23:22.597027   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597041   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.596988   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961604553s)
	I0315 07:23:22.597077   56818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.765153117s)
	I0315 07:23:22.597091   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597147   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597123   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597222   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597448   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597471   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597480   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597488   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597566   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597581   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597593   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597598   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597607   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597609   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597615   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.597627   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.597621   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597818   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597846   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597888   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.597902   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.597911   56818 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-128870"
	I0315 07:23:22.597913   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.597889   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) DBG | Closing plugin on server side
	I0315 07:23:22.598017   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.598027   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.665901   56818 main.go:141] libmachine: Making call to close driver server
	I0315 07:23:22.665930   56818 main.go:141] libmachine: (default-k8s-diff-port-128870) Calling .Close
	I0315 07:23:22.666220   56818 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:23:22.666239   56818 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:23:22.669045   56818 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0315 07:23:18.028052   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:20.527599   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.528317   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.096260   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:24.097037   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:22.670410   56818 addons.go:505] duration metric: took 2.390136718s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0315 07:23:24.530397   56818 pod_ready.go:92] pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.530417   56818 pod_ready.go:81] duration metric: took 4.008147047s for pod "coredns-5dd5756b68-4g87j" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.530426   56818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536799   56818 pod_ready.go:92] pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.536818   56818 pod_ready.go:81] duration metric: took 6.386445ms for pod "coredns-5dd5756b68-5gtx2" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.536826   56818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541830   56818 pod_ready.go:92] pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.541849   56818 pod_ready.go:81] duration metric: took 5.017255ms for pod "etcd-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.541859   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550303   56818 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.550322   56818 pod_ready.go:81] duration metric: took 8.457613ms for pod "kube-apiserver-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.550331   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555212   56818 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.555232   56818 pod_ready.go:81] duration metric: took 4.893889ms for pod "kube-controller-manager-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.555243   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927920   56818 pod_ready.go:92] pod "kube-proxy-97bfn" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:24.927942   56818 pod_ready.go:81] duration metric: took 372.692882ms for pod "kube-proxy-97bfn" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:24.927952   56818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327576   56818 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace has status "Ready":"True"
	I0315 07:23:25.327606   56818 pod_ready.go:81] duration metric: took 399.646811ms for pod "kube-scheduler-default-k8s-diff-port-128870" in "kube-system" namespace to be "Ready" ...
	I0315 07:23:25.327618   56818 pod_ready.go:38] duration metric: took 4.811483571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.327635   56818 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.327697   56818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:25.347434   56818 api_server.go:72] duration metric: took 5.067157997s to wait for apiserver process to appear ...
	I0315 07:23:25.347464   56818 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:25.347486   56818 api_server.go:253] Checking apiserver healthz at https://192.168.50.123:8444/healthz ...
	I0315 07:23:25.353790   56818 api_server.go:279] https://192.168.50.123:8444/healthz returned 200:
	ok
	I0315 07:23:25.355353   56818 api_server.go:141] control plane version: v1.28.4
	I0315 07:23:25.355376   56818 api_server.go:131] duration metric: took 7.903872ms to wait for apiserver health ...
	I0315 07:23:25.355403   56818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:25.531884   56818 system_pods.go:59] 9 kube-system pods found
	I0315 07:23:25.531913   56818 system_pods.go:61] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.531917   56818 system_pods.go:61] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.531920   56818 system_pods.go:61] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.531923   56818 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.531927   56818 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.531930   56818 system_pods.go:61] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.531932   56818 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.531938   56818 system_pods.go:61] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.531941   56818 system_pods.go:61] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.531951   56818 system_pods.go:74] duration metric: took 176.540782ms to wait for pod list to return data ...
	I0315 07:23:25.531960   56818 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:25.728585   56818 default_sa.go:45] found service account: "default"
	I0315 07:23:25.728612   56818 default_sa.go:55] duration metric: took 196.645536ms for default service account to be created ...
	I0315 07:23:25.728622   56818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:25.933674   56818 system_pods.go:86] 9 kube-system pods found
	I0315 07:23:25.933716   56818 system_pods.go:89] "coredns-5dd5756b68-4g87j" [6ba0fa41-99fc-40bb-b877-70017d0573c6] Running
	I0315 07:23:25.933724   56818 system_pods.go:89] "coredns-5dd5756b68-5gtx2" [acdf9648-b6c1-4427-9264-7b1b9c770690] Running
	I0315 07:23:25.933731   56818 system_pods.go:89] "etcd-default-k8s-diff-port-128870" [7297c9d5-7ce0-4f62-b132-bf26d7e5bbd9] Running
	I0315 07:23:25.933738   56818 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-128870" [43ece9f4-a6bf-4ccd-af7d-a1b93b9511cd] Running
	I0315 07:23:25.933746   56818 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-128870" [f5f2f11f-4817-4d6d-a1bc-1b9d225b5f4a] Running
	I0315 07:23:25.933752   56818 system_pods.go:89] "kube-proxy-97bfn" [a05d184b-c67c-43f2-8de4-1d170725deb3] Running
	I0315 07:23:25.933758   56818 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-128870" [3817607b-ff88-475d-9f9c-84bb6eb83f29] Running
	I0315 07:23:25.933768   56818 system_pods.go:89] "metrics-server-57f55c9bc5-59mcw" [da87c104-6961-4bb9-9fa3-b8bb104e2832] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:25.933777   56818 system_pods.go:89] "storage-provisioner" [01b5a36e-b3cd-4258-8e18-8efc850a2bb0] Running
	I0315 07:23:25.933788   56818 system_pods.go:126] duration metric: took 205.160074ms to wait for k8s-apps to be running ...
	I0315 07:23:25.933803   56818 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:25.933860   56818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:25.951093   56818 system_svc.go:56] duration metric: took 17.27976ms WaitForService to wait for kubelet
	I0315 07:23:25.951133   56818 kubeadm.go:576] duration metric: took 5.670862904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:25.951157   56818 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:26.127264   56818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:26.127296   56818 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:26.127310   56818 node_conditions.go:105] duration metric: took 176.148181ms to run NodePressure ...
	I0315 07:23:26.127321   56818 start.go:240] waiting for startup goroutines ...
	I0315 07:23:26.127330   56818 start.go:245] waiting for cluster config update ...
	I0315 07:23:26.127342   56818 start.go:254] writing updated cluster config ...
	I0315 07:23:26.127630   56818 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:26.183228   56818 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:23:26.185072   56818 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-128870" cluster and "default" namespace by default
	I0315 07:23:25.027197   57679 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:25.027228   57679 pod_ready.go:81] duration metric: took 4m0.007738039s for pod "metrics-server-57f55c9bc5-gwnxc" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:25.027237   57679 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:23:25.027243   57679 pod_ready.go:38] duration metric: took 4m4.059491076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:25.027258   57679 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:23:25.027300   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:25.027357   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:25.108565   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:25.108583   57679 cri.go:89] found id: ""
	I0315 07:23:25.108589   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:25.108635   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.113993   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:25.114057   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:25.155255   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:25.155272   57679 cri.go:89] found id: ""
	I0315 07:23:25.155279   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:25.155327   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.160207   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:25.160284   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:25.204769   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:25.204794   57679 cri.go:89] found id: ""
	I0315 07:23:25.204803   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:25.204868   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.209318   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:25.209396   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:25.249665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.249690   57679 cri.go:89] found id: ""
	I0315 07:23:25.249698   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:25.249768   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.254218   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:25.254298   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:25.312087   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.312111   57679 cri.go:89] found id: ""
	I0315 07:23:25.312120   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:25.312183   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.317669   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:25.317739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:25.361009   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.361028   57679 cri.go:89] found id: ""
	I0315 07:23:25.361039   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:25.361089   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.365732   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:25.365793   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:25.412411   57679 cri.go:89] found id: ""
	I0315 07:23:25.412432   57679 logs.go:276] 0 containers: []
	W0315 07:23:25.412440   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:25.412445   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:25.412514   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:25.451942   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.451971   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.451975   57679 cri.go:89] found id: ""
	I0315 07:23:25.451982   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:25.452027   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.456948   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:25.461133   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:25.461159   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:25.522939   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:25.522974   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:25.580937   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:25.580986   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:25.596673   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:25.596710   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:25.642636   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:25.642664   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:25.684783   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:25.684816   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:25.728987   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:25.729012   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:25.791700   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:25.791731   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:25.830176   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:25.830206   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:26.382758   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:26.382805   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:26.547547   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:26.547586   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:26.615743   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:26.615777   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:26.673110   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:26.673138   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:26.597274   56654 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace has status "Ready":"False"
	I0315 07:23:28.089350   56654 pod_ready.go:81] duration metric: took 4m0.000100368s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" ...
	E0315 07:23:28.089390   56654 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8bslq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0315 07:23:28.089414   56654 pod_ready.go:38] duration metric: took 4m10.058752368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:23:28.089446   56654 kubeadm.go:591] duration metric: took 4m17.742227312s to restartPrimaryControlPlane
	W0315 07:23:28.089513   56654 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0315 07:23:28.089536   56654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:23:29.223383   57679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:23:29.246920   57679 api_server.go:72] duration metric: took 4m15.052787663s to wait for apiserver process to appear ...
	I0315 07:23:29.246955   57679 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:23:29.246998   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:29.247068   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:29.297402   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.297431   57679 cri.go:89] found id: ""
	I0315 07:23:29.297441   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:29.297506   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.303456   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:29.303535   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:29.353661   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.353689   57679 cri.go:89] found id: ""
	I0315 07:23:29.353698   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:29.353758   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.359544   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:29.359604   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:29.410319   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:29.410342   57679 cri.go:89] found id: ""
	I0315 07:23:29.410351   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:29.410412   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.415751   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:29.415840   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:29.459665   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:29.459692   57679 cri.go:89] found id: ""
	I0315 07:23:29.459703   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:29.459770   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.464572   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:29.464644   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:29.506861   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:29.506889   57679 cri.go:89] found id: ""
	I0315 07:23:29.506898   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:29.506956   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.514127   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:29.514196   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:29.570564   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:29.570591   57679 cri.go:89] found id: ""
	I0315 07:23:29.570601   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:29.570660   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.575639   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:29.575733   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:29.621838   57679 cri.go:89] found id: ""
	I0315 07:23:29.621873   57679 logs.go:276] 0 containers: []
	W0315 07:23:29.621885   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:29.621893   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:29.621961   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:29.667340   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:29.667374   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:29.667380   57679 cri.go:89] found id: ""
	I0315 07:23:29.667390   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:29.667450   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.672103   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:29.677505   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:29.677600   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:29.826538   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:29.826592   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:29.900139   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:29.900175   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:29.969181   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:29.969211   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:30.019053   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:30.019090   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:30.058353   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:30.058383   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:30.100165   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:30.100193   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:30.158831   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:30.158868   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:30.203568   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:30.203601   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:30.644712   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:30.644750   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:30.701545   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:30.701579   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:30.721251   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:30.721286   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:30.776341   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:30.776375   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:33.335231   57679 api_server.go:253] Checking apiserver healthz at https://192.168.72.106:8443/healthz ...
	I0315 07:23:33.339961   57679 api_server.go:279] https://192.168.72.106:8443/healthz returned 200:
	ok
	I0315 07:23:33.341336   57679 api_server.go:141] control plane version: v1.29.0-rc.2
	I0315 07:23:33.341365   57679 api_server.go:131] duration metric: took 4.09440129s to wait for apiserver health ...
	I0315 07:23:33.341375   57679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:23:33.341402   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:23:33.341467   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:23:33.383915   57679 cri.go:89] found id: "2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:33.383944   57679 cri.go:89] found id: ""
	I0315 07:23:33.383955   57679 logs.go:276] 1 containers: [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535]
	I0315 07:23:33.384013   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.389188   57679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:23:33.389274   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:23:33.431258   57679 cri.go:89] found id: "1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:33.431286   57679 cri.go:89] found id: ""
	I0315 07:23:33.431300   57679 logs.go:276] 1 containers: [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731]
	I0315 07:23:33.431368   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.436265   57679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:23:33.436332   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:23:33.484562   57679 cri.go:89] found id: "3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:33.484585   57679 cri.go:89] found id: ""
	I0315 07:23:33.484592   57679 logs.go:276] 1 containers: [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6]
	I0315 07:23:33.484649   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.489443   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:23:33.489519   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:23:33.537102   57679 cri.go:89] found id: "461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:33.537130   57679 cri.go:89] found id: ""
	I0315 07:23:33.537141   57679 logs.go:276] 1 containers: [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c]
	I0315 07:23:33.537200   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.546560   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:23:33.546632   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:23:33.604178   57679 cri.go:89] found id: "ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:33.604204   57679 cri.go:89] found id: ""
	I0315 07:23:33.604213   57679 logs.go:276] 1 containers: [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f]
	I0315 07:23:33.604274   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.608799   57679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:23:33.608885   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:23:33.648378   57679 cri.go:89] found id: "a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:33.648404   57679 cri.go:89] found id: ""
	I0315 07:23:33.648412   57679 logs.go:276] 1 containers: [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c]
	I0315 07:23:33.648459   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.652658   57679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:23:33.652742   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:23:33.695648   57679 cri.go:89] found id: ""
	I0315 07:23:33.695672   57679 logs.go:276] 0 containers: []
	W0315 07:23:33.695680   57679 logs.go:278] No container was found matching "kindnet"
	I0315 07:23:33.695685   57679 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:23:33.695739   57679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:23:33.744606   57679 cri.go:89] found id: "c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:33.744626   57679 cri.go:89] found id: "4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:33.744633   57679 cri.go:89] found id: ""
	I0315 07:23:33.744642   57679 logs.go:276] 2 containers: [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9]
	I0315 07:23:33.744701   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.749535   57679 ssh_runner.go:195] Run: which crictl
	I0315 07:23:33.753844   57679 logs.go:123] Gathering logs for kubelet ...
	I0315 07:23:33.753869   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:23:33.808610   57679 logs.go:123] Gathering logs for dmesg ...
	I0315 07:23:33.808645   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:23:33.824330   57679 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:23:33.824358   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:23:33.947586   57679 logs.go:123] Gathering logs for etcd [1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731] ...
	I0315 07:23:33.947615   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c840a3842d52c2b8022f3017adbd353be0d58919b669982d3edc0ebc8732731"
	I0315 07:23:34.029067   57679 logs.go:123] Gathering logs for coredns [3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6] ...
	I0315 07:23:34.029103   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3a341887d9bed6b67790d8522ff7ec52bc8a547170516317052f1346e303a6"
	I0315 07:23:34.068543   57679 logs.go:123] Gathering logs for storage-provisioner [c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971] ...
	I0315 07:23:34.068578   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1c3aa6c23ece059230eb88be2cf6174ab6e2908fc1dc001e355a5ae64fd7971"
	I0315 07:23:34.109228   57679 logs.go:123] Gathering logs for kube-apiserver [2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535] ...
	I0315 07:23:34.109255   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2820074ba55a66dc536e7d2a43068f77bef71deebbae98a81bfb2e723a060535"
	I0315 07:23:34.161497   57679 logs.go:123] Gathering logs for kube-scheduler [461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c] ...
	I0315 07:23:34.161528   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461e402c50f1cb687a8107180c843651d290300266ad9d0b77f4855d01b5678c"
	I0315 07:23:34.203057   57679 logs.go:123] Gathering logs for kube-proxy [ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f] ...
	I0315 07:23:34.203086   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca87ab91e305f8d6d4b30b3996ebbbd7091b562bdefee3bb80b51df8b7514d1f"
	I0315 07:23:34.246840   57679 logs.go:123] Gathering logs for kube-controller-manager [a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c] ...
	I0315 07:23:34.246879   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a234f9f8e0d8deea8a69cc74326689ed136eadba1e7da18ac2223044a00d701c"
	I0315 07:23:34.308663   57679 logs.go:123] Gathering logs for storage-provisioner [4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9] ...
	I0315 07:23:34.308699   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba10dcc803b21cb781b20dc6cafafed7d8a32bbbdd90cf614ce60d6578329d9"
	I0315 07:23:34.350721   57679 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:23:34.350755   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:23:34.720198   57679 logs.go:123] Gathering logs for container status ...
	I0315 07:23:34.720237   57679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:23:37.281980   57679 system_pods.go:59] 8 kube-system pods found
	I0315 07:23:37.282007   57679 system_pods.go:61] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.282012   57679 system_pods.go:61] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.282015   57679 system_pods.go:61] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.282019   57679 system_pods.go:61] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.282022   57679 system_pods.go:61] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.282025   57679 system_pods.go:61] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.282032   57679 system_pods.go:61] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.282036   57679 system_pods.go:61] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.282045   57679 system_pods.go:74] duration metric: took 3.940662723s to wait for pod list to return data ...
	I0315 07:23:37.282054   57679 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:23:37.284362   57679 default_sa.go:45] found service account: "default"
	I0315 07:23:37.284388   57679 default_sa.go:55] duration metric: took 2.326334ms for default service account to be created ...
	I0315 07:23:37.284399   57679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:23:37.289958   57679 system_pods.go:86] 8 kube-system pods found
	I0315 07:23:37.289983   57679 system_pods.go:89] "coredns-76f75df574-tc5zh" [2cc47f60-adca-4c07-9366-ac2f84274042] Running
	I0315 07:23:37.289988   57679 system_pods.go:89] "etcd-no-preload-184055" [c4f8e07c-ded2-4360-a547-8a33d6be1d95] Running
	I0315 07:23:37.289993   57679 system_pods.go:89] "kube-apiserver-no-preload-184055" [976ae286-49e6-4711-a9fa-7a11aee6c6f9] Running
	I0315 07:23:37.289997   57679 system_pods.go:89] "kube-controller-manager-no-preload-184055" [3dbb09a8-ff68-4919-af83-299c492204e4] Running
	I0315 07:23:37.290001   57679 system_pods.go:89] "kube-proxy-977jm" [33e526c5-d0ee-46b7-a357-1e6fe36dcd9d] Running
	I0315 07:23:37.290005   57679 system_pods.go:89] "kube-scheduler-no-preload-184055" [de287716-3888-48a6-9270-f8c361c151a5] Running
	I0315 07:23:37.290011   57679 system_pods.go:89] "metrics-server-57f55c9bc5-gwnxc" [abff20ab-2240-4106-b3fc-ffce142e8069] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:23:37.290016   57679 system_pods.go:89] "storage-provisioner" [3d1c5fc1-ba80-48d6-a195-029b3a11abd5] Running
	I0315 07:23:37.290025   57679 system_pods.go:126] duration metric: took 5.621107ms to wait for k8s-apps to be running ...
	I0315 07:23:37.290038   57679 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:23:37.290078   57679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:23:37.308664   57679 system_svc.go:56] duration metric: took 18.618186ms WaitForService to wait for kubelet
	I0315 07:23:37.308698   57679 kubeadm.go:576] duration metric: took 4m23.114571186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:23:37.308724   57679 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:23:37.311673   57679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:23:37.311693   57679 node_conditions.go:123] node cpu capacity is 2
	I0315 07:23:37.311706   57679 node_conditions.go:105] duration metric: took 2.976195ms to run NodePressure ...
	I0315 07:23:37.311719   57679 start.go:240] waiting for startup goroutines ...
	I0315 07:23:37.311728   57679 start.go:245] waiting for cluster config update ...
	I0315 07:23:37.311741   57679 start.go:254] writing updated cluster config ...
	I0315 07:23:37.312001   57679 ssh_runner.go:195] Run: rm -f paused
	I0315 07:23:37.361989   57679 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0315 07:23:37.364040   57679 out.go:177] * Done! kubectl is now configured to use "no-preload-184055" cluster and "default" namespace by default
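With the profile reported as done, the cluster state can be spot-checked from the host, for example (illustrative commands, assuming KUBECONFIG points at the kubeconfig minikube just wrote):

    kubectl --context no-preload-184055 get nodes -o wide
    kubectl --context no-preload-184055 -n kube-system get pods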
	I0315 07:24:00.393591   56654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.304032154s)
	I0315 07:24:00.393676   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:00.410127   56654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:24:00.420913   56654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:00.431516   56654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:00.431542   56654 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:00.431595   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:00.442083   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:00.442148   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:00.452980   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:00.462774   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:00.462850   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:00.473787   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.483835   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:00.483887   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:00.494477   56654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:00.505377   56654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:00.505444   56654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
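The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. A rough shell equivalent of that logic (a sketch, not the actual implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: let kubeadm init recreate it
      fi
    done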
	I0315 07:24:00.516492   56654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:00.574858   56654 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:24:00.574982   56654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:00.733627   56654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:00.733760   56654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:00.733870   56654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:00.955927   56654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:00.957738   56654 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:00.957836   56654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:00.957919   56654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:00.958044   56654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:00.958160   56654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:00.958245   56654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:00.958319   56654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:00.958399   56654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:00.958490   56654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:00.959006   56654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:00.959463   56654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:00.959936   56654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:00.960007   56654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:01.092118   56654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:01.648594   56654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:01.872311   56654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:01.967841   56654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:01.969326   56654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:01.973294   56654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:01.975373   56654 out.go:204]   - Booting up control plane ...
	I0315 07:24:01.975507   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:01.975619   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:01.976012   56654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:01.998836   56654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:02.001054   56654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:02.001261   56654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:02.137106   56654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:08.140408   56654 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003265 seconds
	I0315 07:24:08.140581   56654 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:24:08.158780   56654 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:24:08.689685   56654 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:24:08.689956   56654 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-709708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:24:09.205131   56654 kubeadm.go:309] [bootstrap-token] Using token: sigk62.zuko1mvm18pxemkc
	I0315 07:24:09.206578   56654 out.go:204]   - Configuring RBAC rules ...
	I0315 07:24:09.206689   56654 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:24:09.220110   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:24:09.229465   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:24:09.233683   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:24:09.240661   56654 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:24:09.245300   56654 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:24:09.265236   56654 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:24:09.490577   56654 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:24:09.631917   56654 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:24:09.632947   56654 kubeadm.go:309] 
	I0315 07:24:09.633052   56654 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:24:09.633066   56654 kubeadm.go:309] 
	I0315 07:24:09.633180   56654 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:24:09.633198   56654 kubeadm.go:309] 
	I0315 07:24:09.633226   56654 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:24:09.633305   56654 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:24:09.633384   56654 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:24:09.633394   56654 kubeadm.go:309] 
	I0315 07:24:09.633473   56654 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:24:09.633492   56654 kubeadm.go:309] 
	I0315 07:24:09.633560   56654 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:24:09.633569   56654 kubeadm.go:309] 
	I0315 07:24:09.633648   56654 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:24:09.633754   56654 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:24:09.633853   56654 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:24:09.633870   56654 kubeadm.go:309] 
	I0315 07:24:09.633992   56654 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:24:09.634060   56654 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:24:09.634067   56654 kubeadm.go:309] 
	I0315 07:24:09.634134   56654 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634251   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b \
	I0315 07:24:09.634279   56654 kubeadm.go:309] 	--control-plane 
	I0315 07:24:09.634287   56654 kubeadm.go:309] 
	I0315 07:24:09.634390   56654 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:24:09.634402   56654 kubeadm.go:309] 
	I0315 07:24:09.634467   56654 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sigk62.zuko1mvm18pxemkc \
	I0315 07:24:09.634554   56654 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e16b79490426381891fbd64842324fa4349a9956655d765567dd5f112d14de2b 
	I0315 07:24:09.635408   56654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:24:09.635440   56654 cni.go:84] Creating CNI manager for ""
	I0315 07:24:09.635453   56654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 07:24:09.637959   56654 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 07:24:09.639507   56654 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 07:24:09.653761   56654 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
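The 457-byte file written above is minikube's bridge CNI config; its exact contents are not printed in the log, but a conflist of that general shape looks like the following (hypothetical example values, not the file the test actually wrote):

    # Illustrative bridge conflist of the kind placed at /etc/cni/net.d/1-k8s.conflist
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF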
	I0315 07:24:09.707123   56654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:24:09.707255   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:09.707289   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709708 minikube.k8s.io/updated_at=2024_03_15T07_24_09_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-709708 minikube.k8s.io/primary=true
	I0315 07:24:10.001845   56654 ops.go:34] apiserver oom_adj: -16
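The clusterrolebinding and label commands a few lines above grant the kube-system default service account cluster-admin (minikube-rbac) and stamp the node with minikube metadata, while the oom_adj line confirms the apiserver process runs with a reduced OOM-kill priority (-16). Both results can be inspected afterwards with something like (illustrative, assuming KUBECONFIG points at the test's kubeconfig):

    kubectl --context embed-certs-709708 get clusterrolebinding minikube-rbac -o wide
    kubectl --context embed-certs-709708 get node embed-certs-709708 --show-labels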
	I0315 07:24:10.001920   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:10.502214   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.002618   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:11.502991   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.002035   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:12.502053   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.002078   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:13.502779   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.002495   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:14.502594   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.002964   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:15.502151   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.002520   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:16.502704   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.001961   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:17.502532   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.002917   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:18.502147   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.002882   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:19.502008   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.002805   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:20.502026   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.002242   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:21.502881   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.002756   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.502373   56654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:24:22.603876   56654 kubeadm.go:1107] duration metric: took 12.896678412s to wait for elevateKubeSystemPrivileges
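The repeated "kubectl get sa default" calls above are a ~500ms polling loop: elevateKubeSystemPrivileges is treated as complete once the default service account exists in the new cluster. A rough shell equivalent of that wait (sketch only, using the paths from the log):

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the service account controller has created "default"
    done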
	W0315 07:24:22.603920   56654 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:24:22.603931   56654 kubeadm.go:393] duration metric: took 5m12.309521438s to StartCluster
	I0315 07:24:22.603952   56654 settings.go:142] acquiring lock: {Name:mk89e6e6869098fd2d60b1998fac654e718e2877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.604047   56654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:24:22.605863   56654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/kubeconfig: {Name:mk7c2f7883e3fa5737254b40b6f9a491ec48ab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:24:22.606170   56654 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 07:24:22.608123   56654 out.go:177] * Verifying Kubernetes components...
	I0315 07:24:22.606248   56654 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:24:22.606384   56654 config.go:182] Loaded profile config "embed-certs-709708": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:24:22.609690   56654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:24:22.608217   56654 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709708"
	I0315 07:24:22.609818   56654 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709708"
	I0315 07:24:22.608227   56654 addons.go:69] Setting metrics-server=true in profile "embed-certs-709708"
	W0315 07:24:22.609835   56654 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:24:22.609864   56654 addons.go:234] Setting addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:22.609873   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	W0315 07:24:22.609877   56654 addons.go:243] addon metrics-server should already be in state true
	I0315 07:24:22.609911   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.608237   56654 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709708"
	I0315 07:24:22.610005   56654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709708"
	I0315 07:24:22.610268   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610307   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610308   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610351   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.610384   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.610401   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.626781   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0315 07:24:22.627302   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.627512   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0315 07:24:22.627955   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.627990   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628051   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.628391   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.628553   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.628580   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.628726   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.629068   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.629139   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0315 07:24:22.629457   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.630158   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.630196   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.630795   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.630806   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.631928   56654 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709708"
	W0315 07:24:22.631939   56654 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:24:22.631961   56654 host.go:66] Checking if "embed-certs-709708" exists ...
	I0315 07:24:22.632184   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.632203   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.632434   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.632957   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.633002   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.647875   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0315 07:24:22.647906   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0315 07:24:22.648289   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648618   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.648766   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.648790   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649194   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.649249   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.649252   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.649803   56654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 07:24:22.649839   56654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 07:24:22.650065   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.650306   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.651262   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I0315 07:24:22.651784   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.652234   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.652363   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.652382   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.654525   56654 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:24:22.652809   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.655821   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:24:22.655842   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:24:22.655866   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.655980   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.658998   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.660773   56654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:24:22.659588   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.660235   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.662207   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.662245   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.662272   56654 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.662281   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:24:22.662295   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.662347   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.662527   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.662706   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.665737   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.665987   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.666008   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.666202   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.666309   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.666430   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.666530   56654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0315 07:24:22.666667   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.666853   56654 main.go:141] libmachine: () Calling .GetVersion
	I0315 07:24:22.667235   56654 main.go:141] libmachine: Using API Version  1
	I0315 07:24:22.667250   56654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 07:24:22.667520   56654 main.go:141] libmachine: () Calling .GetMachineName
	I0315 07:24:22.667689   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetState
	I0315 07:24:22.669665   56654 main.go:141] libmachine: (embed-certs-709708) Calling .DriverName
	I0315 07:24:22.669907   56654 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:22.669924   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:24:22.669941   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHHostname
	I0315 07:24:22.672774   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673235   56654 main.go:141] libmachine: (embed-certs-709708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:25:ab", ip: ""} in network mk-embed-certs-709708: {Iface:virbr1 ExpiryTime:2024-03-15 08:08:57 +0000 UTC Type:0 Mac:52:54:00:46:25:ab Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-709708 Clientid:01:52:54:00:46:25:ab}
	I0315 07:24:22.673264   56654 main.go:141] libmachine: (embed-certs-709708) DBG | domain embed-certs-709708 has defined IP address 192.168.39.80 and MAC address 52:54:00:46:25:ab in network mk-embed-certs-709708
	I0315 07:24:22.673337   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHPort
	I0315 07:24:22.673498   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHKeyPath
	I0315 07:24:22.673626   56654 main.go:141] libmachine: (embed-certs-709708) Calling .GetSSHUsername
	I0315 07:24:22.673732   56654 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/embed-certs-709708/id_rsa Username:docker}
	I0315 07:24:22.817572   56654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:24:22.837781   56654 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850719   56654 node_ready.go:49] node "embed-certs-709708" has status "Ready":"True"
	I0315 07:24:22.850739   56654 node_ready.go:38] duration metric: took 12.926045ms for node "embed-certs-709708" to be "Ready" ...
	I0315 07:24:22.850748   56654 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.854656   56654 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860210   56654 pod_ready.go:92] pod "etcd-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.860228   56654 pod_ready.go:81] duration metric: took 5.549905ms for pod "etcd-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.860235   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864895   56654 pod_ready.go:92] pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.864916   56654 pod_ready.go:81] duration metric: took 4.673817ms for pod "kube-apiserver-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.864927   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869645   56654 pod_ready.go:92] pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.869662   56654 pod_ready.go:81] duration metric: took 4.727813ms for pod "kube-controller-manager-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.869672   56654 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875531   56654 pod_ready.go:92] pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace has status "Ready":"True"
	I0315 07:24:22.875552   56654 pod_ready.go:81] duration metric: took 5.873575ms for pod "kube-scheduler-embed-certs-709708" in "kube-system" namespace to be "Ready" ...
	I0315 07:24:22.875559   56654 pod_ready.go:38] duration metric: took 24.802532ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:24:22.875570   56654 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:24:22.875613   56654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:24:22.918043   56654 api_server.go:72] duration metric: took 311.835798ms to wait for apiserver process to appear ...
	I0315 07:24:22.918068   56654 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:24:22.918083   56654 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0315 07:24:22.926008   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:24:22.927745   56654 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0315 07:24:22.931466   56654 api_server.go:141] control plane version: v1.28.4
	I0315 07:24:22.931494   56654 api_server.go:131] duration metric: took 13.421255ms to wait for apiserver health ...
	I0315 07:24:22.931505   56654 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:24:22.933571   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:24:23.060163   56654 system_pods.go:59] 5 kube-system pods found
	I0315 07:24:23.060193   56654 system_pods.go:61] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.060199   56654 system_pods.go:61] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.060203   56654 system_pods.go:61] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.060209   56654 system_pods.go:61] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.060214   56654 system_pods.go:61] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.060223   56654 system_pods.go:74] duration metric: took 128.712039ms to wait for pod list to return data ...
	I0315 07:24:23.060233   56654 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:24:23.087646   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:24:23.087694   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:24:23.143482   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:24:23.143520   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:24:23.187928   56654 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:24:23.187957   56654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:24:23.212632   56654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
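After those manifests are applied, the enabled addons can be checked directly against the cluster, for example (illustrative commands, not part of the test run):

    kubectl --context embed-certs-709708 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-709708 get storageclass
    kubectl --context embed-certs-709708 -n kube-system get pod storage-provisioner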
	I0315 07:24:23.269897   56654 default_sa.go:45] found service account: "default"
	I0315 07:24:23.269928   56654 default_sa.go:55] duration metric: took 209.686721ms for default service account to be created ...
	I0315 07:24:23.269940   56654 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:24:23.456322   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.456355   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456367   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.456376   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.456384   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.456393   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.456401   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.456405   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.456422   56654 retry.go:31] will retry after 213.945327ms: missing components: kube-dns, kube-proxy
	I0315 07:24:23.677888   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:23.677919   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677926   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:23.677932   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:23.677938   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:23.677944   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:23.677949   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 07:24:23.677953   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:23.677967   56654 retry.go:31] will retry after 314.173951ms: missing components: kube-dns, kube-proxy
	I0315 07:24:24.028684   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.028726   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028740   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.028750   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.028758   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.028765   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.028778   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.028787   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.028809   56654 retry.go:31] will retry after 357.807697ms: missing components: kube-dns
	I0315 07:24:24.428017   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.428057   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428065   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 07:24:24.428072   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.428077   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.428082   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.428086   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.428091   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.428110   56654 retry.go:31] will retry after 516.115893ms: missing components: kube-dns
	I0315 07:24:24.950681   56654 system_pods.go:86] 7 kube-system pods found
	I0315 07:24:24.950715   56654 system_pods.go:89] "coredns-5dd5756b68-pqjfs" [549c60e3-dd25-4fb5-8172-bb3e916b619f] Running
	I0315 07:24:24.950724   56654 system_pods.go:89] "coredns-5dd5756b68-v2mxd" [feedfa3b-a7de-471c-9b53-7b1eda6279dc] Running
	I0315 07:24:24.950730   56654 system_pods.go:89] "etcd-embed-certs-709708" [32bd632a-55d3-460c-8570-0ef6d0334325] Running
	I0315 07:24:24.950737   56654 system_pods.go:89] "kube-apiserver-embed-certs-709708" [0d766b6d-4bd3-498a-9c40-ba72b00e02af] Running
	I0315 07:24:24.950743   56654 system_pods.go:89] "kube-controller-manager-embed-certs-709708" [9010bf1a-5f4e-4baa-9e84-bc9fc5efa7f3] Running
	I0315 07:24:24.950749   56654 system_pods.go:89] "kube-proxy-8pd5c" [46c8415c-ce4b-48ce-be7f-f9a313a1f969] Running
	I0315 07:24:24.950755   56654 system_pods.go:89] "kube-scheduler-embed-certs-709708" [be685bae-5658-4a38-a77f-a93c5773db96] Running
	I0315 07:24:24.950764   56654 system_pods.go:126] duration metric: took 1.680817192s to wait for k8s-apps to be running ...
	I0315 07:24:24.950774   56654 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:24:24.950825   56654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:25.286592   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.360550084s)
	I0315 07:24:25.286642   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286593   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.352989823s)
	I0315 07:24:25.286656   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.286732   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.286840   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287214   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287266   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287282   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287285   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287295   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287304   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287273   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287248   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287361   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.287379   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.287556   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287697   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287634   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.287655   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.287805   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.287680   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302577   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.302606   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.302899   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.302928   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.302938   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337010   56654 system_svc.go:56] duration metric: took 386.226838ms WaitForService to wait for kubelet
	I0315 07:24:25.337043   56654 kubeadm.go:576] duration metric: took 2.730837008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:24:25.337078   56654 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:24:25.337119   56654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.124449207s)
	I0315 07:24:25.337162   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337178   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337473   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337514   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337522   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337529   56654 main.go:141] libmachine: Making call to close driver server
	I0315 07:24:25.337536   56654 main.go:141] libmachine: (embed-certs-709708) Calling .Close
	I0315 07:24:25.337822   56654 main.go:141] libmachine: (embed-certs-709708) DBG | Closing plugin on server side
	I0315 07:24:25.337865   56654 main.go:141] libmachine: Successfully made call to close driver server
	I0315 07:24:25.337877   56654 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 07:24:25.337890   56654 addons.go:470] Verifying addon metrics-server=true in "embed-certs-709708"
	I0315 07:24:25.340315   56654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0315 07:24:25.341924   56654 addons.go:505] duration metric: took 2.735676913s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0315 07:24:25.349938   56654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 07:24:25.349958   56654 node_conditions.go:123] node cpu capacity is 2
	I0315 07:24:25.349968   56654 node_conditions.go:105] duration metric: took 12.884794ms to run NodePressure ...
	I0315 07:24:25.349980   56654 start.go:240] waiting for startup goroutines ...
	I0315 07:24:25.349987   56654 start.go:245] waiting for cluster config update ...
	I0315 07:24:25.349995   56654 start.go:254] writing updated cluster config ...
	I0315 07:24:25.350250   56654 ssh_runner.go:195] Run: rm -f paused
	I0315 07:24:25.406230   56654 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:24:25.408287   56654 out.go:177] * Done! kubectl is now configured to use "embed-certs-709708" cluster and "default" namespace by default
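	A minimal spot-check sketch for the state reported above, assuming the "embed-certs-709708" kubectl context mentioned on the last line is still available on the host; the pod and addon names are taken from the log, the rest is ordinary kubectl usage:

	  kubectl --context embed-certs-709708 -n kube-system get pods -o wide         # coredns, etcd, kube-* and kube-proxy pods should all be Running
	  kubectl --context embed-certs-709708 get apiservices v1beta1.metrics.k8s.io  # APIService registered by the metrics-server addon
	  kubectl --context embed-certs-709708 describe nodes                          # node conditions and capacity, as checked by the NodePressure verification above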
	I0315 07:24:35.350302   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:24:35.350393   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:24:35.351921   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:35.351976   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:35.352067   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:35.352191   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:35.352325   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:35.352445   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:35.354342   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:35.354413   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:35.354492   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:35.354593   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:35.354671   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:35.354736   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:35.354779   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:35.354829   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:35.354877   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:35.354934   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:35.354996   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:35.355032   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:35.355076   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:35.355116   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:35.355157   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:35.355210   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:35.355253   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:35.355360   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:35.355470   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:35.355531   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:35.355611   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:35.358029   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:35.358113   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:35.358200   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:35.358272   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:35.358379   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:35.358621   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:24:35.358682   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:24:35.358767   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.358974   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359037   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359235   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359303   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359517   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359592   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.359766   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.359866   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:24:35.360089   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:24:35.360108   57277 kubeadm.go:309] 
	I0315 07:24:35.360150   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:24:35.360212   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:24:35.360220   57277 kubeadm.go:309] 
	I0315 07:24:35.360249   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:24:35.360357   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:24:35.360546   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:24:35.360558   57277 kubeadm.go:309] 
	I0315 07:24:35.360697   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:24:35.360734   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:24:35.360768   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:24:35.360772   57277 kubeadm.go:309] 
	I0315 07:24:35.360855   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:24:35.360927   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:24:35.360934   57277 kubeadm.go:309] 
	I0315 07:24:35.361057   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:24:35.361152   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:24:35.361251   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:24:35.361361   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:24:35.361386   57277 kubeadm.go:309] 
	W0315 07:24:35.361486   57277 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0315 07:24:35.361547   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0315 07:24:36.531032   57277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.169455327s)
	I0315 07:24:36.531111   57277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:24:36.546033   57277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:24:36.556242   57277 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:24:36.556263   57277 kubeadm.go:156] found existing configuration files:
	
	I0315 07:24:36.556319   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:24:36.566209   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:24:36.566271   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:24:36.576101   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:24:36.586297   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:24:36.586389   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:24:36.597562   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.607690   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:24:36.607754   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:24:36.620114   57277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:24:36.629920   57277 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:24:36.629980   57277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:24:36.640423   57277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 07:24:36.709111   57277 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0315 07:24:36.709179   57277 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:24:36.875793   57277 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:24:36.875935   57277 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:24:36.876039   57277 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:24:37.064006   57277 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:24:37.067032   57277 out.go:204]   - Generating certificates and keys ...
	I0315 07:24:37.067125   57277 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:24:37.067237   57277 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:24:37.067376   57277 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0315 07:24:37.067444   57277 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0315 07:24:37.067531   57277 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0315 07:24:37.067630   57277 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0315 07:24:37.067715   57277 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0315 07:24:37.067817   57277 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0315 07:24:37.067953   57277 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0315 07:24:37.068067   57277 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0315 07:24:37.068125   57277 kubeadm.go:309] [certs] Using the existing "sa" key
	I0315 07:24:37.068230   57277 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:24:37.225324   57277 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:24:37.392633   57277 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:24:37.608570   57277 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:24:37.737553   57277 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:24:37.753302   57277 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:24:37.754611   57277 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:24:37.754678   57277 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:24:37.927750   57277 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:24:37.929505   57277 out.go:204]   - Booting up control plane ...
	I0315 07:24:37.929626   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:24:37.937053   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:24:37.937150   57277 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:24:37.937378   57277 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:24:37.943172   57277 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:25:17.946020   57277 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0315 07:25:17.946242   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:17.946472   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:22.946986   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:22.947161   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:32.947938   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:32.948242   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:25:52.948437   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:25:52.948689   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947655   57277 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0315 07:26:32.947879   57277 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0315 07:26:32.947906   57277 kubeadm.go:309] 
	I0315 07:26:32.947974   57277 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0315 07:26:32.948133   57277 kubeadm.go:309] 		timed out waiting for the condition
	I0315 07:26:32.948154   57277 kubeadm.go:309] 
	I0315 07:26:32.948202   57277 kubeadm.go:309] 	This error is likely caused by:
	I0315 07:26:32.948249   57277 kubeadm.go:309] 		- The kubelet is not running
	I0315 07:26:32.948402   57277 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0315 07:26:32.948424   57277 kubeadm.go:309] 
	I0315 07:26:32.948584   57277 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0315 07:26:32.948637   57277 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0315 07:26:32.948689   57277 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0315 07:26:32.948707   57277 kubeadm.go:309] 
	I0315 07:26:32.948818   57277 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0315 07:26:32.948954   57277 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0315 07:26:32.948971   57277 kubeadm.go:309] 
	I0315 07:26:32.949094   57277 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0315 07:26:32.949207   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0315 07:26:32.949305   57277 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0315 07:26:32.949422   57277 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0315 07:26:32.949439   57277 kubeadm.go:309] 
	I0315 07:26:32.951015   57277 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:26:32.951137   57277 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0315 07:26:32.951233   57277 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0315 07:26:32.951324   57277 kubeadm.go:393] duration metric: took 7m59.173049276s to StartCluster
	I0315 07:26:32.951374   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:26:32.951440   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:26:33.003448   57277 cri.go:89] found id: ""
	I0315 07:26:33.003480   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.003488   57277 logs.go:278] No container was found matching "kube-apiserver"
	I0315 07:26:33.003494   57277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0315 07:26:33.003554   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:26:33.045008   57277 cri.go:89] found id: ""
	I0315 07:26:33.045044   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.045051   57277 logs.go:278] No container was found matching "etcd"
	I0315 07:26:33.045057   57277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0315 07:26:33.045110   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:26:33.090459   57277 cri.go:89] found id: ""
	I0315 07:26:33.090487   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.090496   57277 logs.go:278] No container was found matching "coredns"
	I0315 07:26:33.090501   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:26:33.090549   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:26:33.131395   57277 cri.go:89] found id: ""
	I0315 07:26:33.131424   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.131436   57277 logs.go:278] No container was found matching "kube-scheduler"
	I0315 07:26:33.131444   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:26:33.131506   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:26:33.171876   57277 cri.go:89] found id: ""
	I0315 07:26:33.171911   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.171923   57277 logs.go:278] No container was found matching "kube-proxy"
	I0315 07:26:33.171931   57277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:26:33.171989   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:26:33.214298   57277 cri.go:89] found id: ""
	I0315 07:26:33.214325   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.214333   57277 logs.go:278] No container was found matching "kube-controller-manager"
	I0315 07:26:33.214340   57277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0315 07:26:33.214405   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:26:33.266593   57277 cri.go:89] found id: ""
	I0315 07:26:33.266690   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.266703   57277 logs.go:278] No container was found matching "kindnet"
	I0315 07:26:33.266711   57277 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:26:33.266776   57277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:26:33.312019   57277 cri.go:89] found id: ""
	I0315 07:26:33.312053   57277 logs.go:276] 0 containers: []
	W0315 07:26:33.312061   57277 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0315 07:26:33.312070   57277 logs.go:123] Gathering logs for CRI-O ...
	I0315 07:26:33.312085   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0315 07:26:33.422127   57277 logs.go:123] Gathering logs for container status ...
	I0315 07:26:33.422160   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:26:33.470031   57277 logs.go:123] Gathering logs for kubelet ...
	I0315 07:26:33.470064   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0315 07:26:33.526968   57277 logs.go:123] Gathering logs for dmesg ...
	I0315 07:26:33.527002   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:26:33.542343   57277 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:26:33.542378   57277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0315 07:26:33.623229   57277 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0315 07:26:33.623266   57277 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0315 07:26:33.623318   57277 out.go:239] * 
	W0315 07:26:33.623402   57277 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.623438   57277 out.go:239] * 
	W0315 07:26:33.624306   57277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:26:33.627641   57277 out.go:177] 
	W0315 07:26:33.629221   57277 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0315 07:26:33.629270   57277 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0315 07:26:33.629293   57277 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0315 07:26:33.630909   57277 out.go:177] 
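	A minimal troubleshooting sketch assembled only from the commands the failure output above already suggests, assuming shell access to the affected node through its minikube profile (the node name old-k8s-version-981420 is taken from the CRI-O log below; that the profile uses the same name is an assumption):

	  # open a shell on the node (profile name assumed to match the node name below)
	  minikube ssh -p old-k8s-version-981420

	  # inside the node: the checks recommended by the kubeadm output above
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 100
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	  # back on the host: retry with the cgroup driver override suggested above
	  minikube start -p old-k8s-version-981420 --extra-config=kubelet.cgroup-driver=systemd

	If a cgroup-driver mismatch is indeed the cause, the kubelet health endpoint at http://localhost:10248/healthz polled above should start answering after the retry.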
	
	
	==> CRI-O <==
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.434822643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488267434801326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80092810-b3ea-41d0-b3a3-a7216acd7eac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.435472020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5578cf88-f04d-4721-946e-cf3b9aad571a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.435584869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5578cf88-f04d-4721-946e-cf3b9aad571a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.435618187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5578cf88-f04d-4721-946e-cf3b9aad571a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.467212519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a12246f-4406-4843-949d-3aa2e5ccc92d name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.467310563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a12246f-4406-4843-949d-3aa2e5ccc92d name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.468314020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dd64c29-bf58-4466-a07a-a0ae5fd02287 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.468775109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488267468745363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dd64c29-bf58-4466-a07a-a0ae5fd02287 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.469285755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85c605ce-c623-4501-8cdd-90b25b186b6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.469336807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85c605ce-c623-4501-8cdd-90b25b186b6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.469369950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=85c605ce-c623-4501-8cdd-90b25b186b6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.505823500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b1afe42-0518-4b0e-9026-25f296a0c8dc name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.505907533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b1afe42-0518-4b0e-9026-25f296a0c8dc name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.508158517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24987fde-490d-400d-96bd-006ecbc1a56c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.508691066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488267508655750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24987fde-490d-400d-96bd-006ecbc1a56c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.509314778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6714b2ed-ea6e-406d-9c9f-dbf7b6207e4d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.509390855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6714b2ed-ea6e-406d-9c9f-dbf7b6207e4d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.509452829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6714b2ed-ea6e-406d-9c9f-dbf7b6207e4d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.543603115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e711b0b1-cd14-4cca-8a2b-376158fa28ae name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.543682063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e711b0b1-cd14-4cca-8a2b-376158fa28ae name=/runtime.v1.RuntimeService/Version
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.545461559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3018bd57-74ad-4033-8fc0-f9608b480040 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.545923023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710488267545895503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3018bd57-74ad-4033-8fc0-f9608b480040 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.546639500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0cdcb1f-abfd-451f-be92-2f1e00452884 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.546717960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0cdcb1f-abfd-451f-be92-2f1e00452884 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 07:37:47 old-k8s-version-981420 crio[649]: time="2024-03-15 07:37:47.546760991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f0cdcb1f-abfd-451f-be92-2f1e00452884 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar15 07:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054732] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711901] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.844497] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.626265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.561722] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.063802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070293] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.224970] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.142626] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.286086] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.591583] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.077354] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095694] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +9.234531] kauditd_printk_skb: 46 callbacks suppressed
	[Mar15 07:22] systemd-fstab-generator[4974]: Ignoring "noauto" option for root device
	[Mar15 07:24] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.078685] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 07:37:47 up 19 min,  0 users,  load average: 0.00, 0.04, 0.04
	Linux old-k8s-version-981420 5.10.207 #1 SMP Fri Mar 15 04:22:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000819a70)
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: goroutine 159 [select]:
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000dfbef0, 0x4f0ac20, 0xc000050aa0, 0x1, 0xc00009e0c0)
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000df2000, 0xc00009e0c0)
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c4a250, 0xc000c32720)
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 15 07:37:42 old-k8s-version-981420 kubelet[6721]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 15 07:37:42 old-k8s-version-981420 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 15 07:37:42 old-k8s-version-981420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 15 07:37:43 old-k8s-version-981420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 136.
	Mar 15 07:37:43 old-k8s-version-981420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 15 07:37:43 old-k8s-version-981420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 15 07:37:43 old-k8s-version-981420 kubelet[6730]: I0315 07:37:43.193602    6730 server.go:416] Version: v1.20.0
	Mar 15 07:37:43 old-k8s-version-981420 kubelet[6730]: I0315 07:37:43.193926    6730 server.go:837] Client rotation is on, will bootstrap in background
	Mar 15 07:37:43 old-k8s-version-981420 kubelet[6730]: I0315 07:37:43.195926    6730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 15 07:37:43 old-k8s-version-981420 kubelet[6730]: W0315 07:37:43.196853    6730 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 15 07:37:43 old-k8s-version-981420 kubelet[6730]: I0315 07:37:43.197165    6730 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 2 (245.345754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-981420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.58s)
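The kubeadm and kubelet output captured above already names the likely next steps. A minimal remediation sketch, assuming the same old-k8s-version-981420 profile and the kvm2/cri-o settings used by this job (the commands are illustrative and were not verified against this run; CONTAINERID is the placeholder from the log):

  # inspect why kubelet keeps crash-looping (the restart counter reached 136)
  minikube ssh -p old-k8s-version-981420 "sudo journalctl -xeu kubelet"

  # the [WARNING Service-Kubelet] message asks for the unit to be enabled
  minikube ssh -p old-k8s-version-981420 "sudo systemctl enable kubelet.service"

  # retry the start with the cgroup driver the suggestion points at
  minikube start -p old-k8s-version-981420 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

  # once containers exist, pull their logs through CRI-O as the output suggests
  minikube ssh -p old-k8s-version-981420 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"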

                                                
                                    

Test pass (254/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.47
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 23.42
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 41.78
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.56
31 TestOffline 111.8
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 150.77
38 TestAddons/parallel/Registry 17.88
40 TestAddons/parallel/InspektorGadget 12.87
41 TestAddons/parallel/MetricsServer 6.85
42 TestAddons/parallel/HelmTiller 15.09
44 TestAddons/parallel/CSI 67.66
45 TestAddons/parallel/Headlamp 19.43
46 TestAddons/parallel/CloudSpanner 6.77
47 TestAddons/parallel/LocalPath 59.67
48 TestAddons/parallel/NvidiaDevicePlugin 6.54
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 58.56
55 TestCertExpiration 285.59
57 TestForceSystemdFlag 73.82
58 TestForceSystemdEnv 62.36
60 TestKVMDriverInstallOrUpdate 5.5
64 TestErrorSpam/setup 45.36
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.7
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 5.76
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.03
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 39.3
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
81 TestFunctional/serial/CacheCmd/cache/add_local 2.23
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 32.31
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.61
92 TestFunctional/serial/LogsFileCmd 1.66
93 TestFunctional/serial/InvalidService 4.29
95 TestFunctional/parallel/ConfigCmd 0.41
96 TestFunctional/parallel/DashboardCmd 18.73
97 TestFunctional/parallel/DryRun 0.33
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.42
103 TestFunctional/parallel/ServiceCmdConnect 10.7
104 TestFunctional/parallel/AddonsCmd 0.23
105 TestFunctional/parallel/PersistentVolumeClaim 42.62
107 TestFunctional/parallel/SSHCmd 0.52
108 TestFunctional/parallel/CpCmd 1.57
109 TestFunctional/parallel/MySQL 38.35
110 TestFunctional/parallel/FileSync 0.28
111 TestFunctional/parallel/CertSync 1.71
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
119 TestFunctional/parallel/License 0.65
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
122 TestFunctional/parallel/MountCmd/any-port 11.71
123 TestFunctional/parallel/ProfileCmd/profile_list 0.35
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
128 TestFunctional/parallel/ServiceCmd/List 0.88
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
130 TestFunctional/parallel/MountCmd/specific-port 2.02
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
132 TestFunctional/parallel/ServiceCmd/Format 0.32
133 TestFunctional/parallel/ServiceCmd/URL 0.31
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
138 TestFunctional/parallel/ImageCommands/ImageBuild 3.6
139 TestFunctional/parallel/ImageCommands/Setup 2.07
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
141 TestFunctional/parallel/Version/short 0.08
142 TestFunctional/parallel/Version/components 0.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.24
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.24
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.07
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.01
156 TestFunctional/parallel/ImageCommands/ImageRemove 1.82
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.74
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.08
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 227.29
166 TestMultiControlPlane/serial/DeployApp 6.58
167 TestMultiControlPlane/serial/PingHostFromPods 1.45
168 TestMultiControlPlane/serial/AddWorkerNode 46.78
169 TestMultiControlPlane/serial/NodeLabels 0.08
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 13.78
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
187 TestJSONOutput/start/Command 96.87
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.73
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.7
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.43
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 93.3
219 TestMountStart/serial/StartWithMountFirst 26.1
220 TestMountStart/serial/VerifyMountFirst 0.38
221 TestMountStart/serial/StartWithMountSecond 26.6
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.9
224 TestMountStart/serial/VerifyMountPostDelete 0.38
225 TestMountStart/serial/Stop 1.34
226 TestMountStart/serial/RestartStopped 22.97
227 TestMountStart/serial/VerifyMountPostStop 0.4
230 TestMultiNode/serial/FreshStart2Nodes 105.91
231 TestMultiNode/serial/DeployApp2Nodes 7.42
232 TestMultiNode/serial/PingHostFrom2Pods 0.88
233 TestMultiNode/serial/AddNode 40.83
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.25
236 TestMultiNode/serial/CopyFile 7.61
237 TestMultiNode/serial/StopNode 2.49
238 TestMultiNode/serial/StartAfterStop 28.02
240 TestMultiNode/serial/DeleteNode 2.3
242 TestMultiNode/serial/RestartMultiNode 176.77
243 TestMultiNode/serial/ValidateNameConflict 44.21
250 TestScheduledStopUnix 116.76
254 TestRunningBinaryUpgrade 183.75
269 TestPause/serial/Start 124.5
274 TestNetworkPlugins/group/false 3.26
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
280 TestNoKubernetes/serial/StartWithK8s 113.77
281 TestStoppedBinaryUpgrade/Setup 2.3
282 TestStoppedBinaryUpgrade/Upgrade 188.76
283 TestNoKubernetes/serial/StartWithStopK8s 5.56
284 TestPause/serial/SecondStartNoReconfiguration 39.44
285 TestNoKubernetes/serial/Start 28.23
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
287 TestNoKubernetes/serial/ProfileList 1.12
288 TestNoKubernetes/serial/Stop 1.39
289 TestNoKubernetes/serial/StartNoArgs 44.16
290 TestPause/serial/Pause 0.7
291 TestPause/serial/VerifyStatus 0.25
292 TestPause/serial/Unpause 0.66
293 TestPause/serial/PauseAgain 0.92
294 TestPause/serial/DeletePaused 0.81
295 TestPause/serial/VerifyDeletedResources 0.28
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
297 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
301 TestStartStop/group/embed-certs/serial/FirstStart 110.64
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 123.42
304 TestStartStop/group/embed-certs/serial/DeployApp 11.33
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
311 TestStartStop/group/no-preload/serial/FirstStart 114.14
315 TestStartStop/group/embed-certs/serial/SecondStart 680.27
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 600.73
318 TestStartStop/group/no-preload/serial/DeployApp 10.3
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
321 TestStartStop/group/old-k8s-version/serial/Stop 1.43
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/no-preload/serial/SecondStart 434.97
335 TestStartStop/group/newest-cni/serial/FirstStart 57.97
336 TestNetworkPlugins/group/auto/Start 59.17
337 TestNetworkPlugins/group/kindnet/Start 86.15
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
340 TestStartStop/group/newest-cni/serial/Stop 10.83
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
342 TestStartStop/group/newest-cni/serial/SecondStart 69.07
343 TestNetworkPlugins/group/auto/KubeletFlags 0.29
344 TestNetworkPlugins/group/auto/NetCatPod 15.59
345 TestNetworkPlugins/group/auto/DNS 33.46
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
349 TestStartStop/group/newest-cni/serial/Pause 2.7
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/Start 94.7
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
353 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
354 TestNetworkPlugins/group/auto/Localhost 0.14
355 TestNetworkPlugins/group/auto/HairPin 0.14
356 TestNetworkPlugins/group/kindnet/DNS 0.2
357 TestNetworkPlugins/group/kindnet/Localhost 0.17
358 TestNetworkPlugins/group/kindnet/HairPin 0.15
359 TestNetworkPlugins/group/custom-flannel/Start 91.26
360 TestNetworkPlugins/group/enable-default-cni/Start 90.08
361 TestNetworkPlugins/group/flannel/Start 88.26
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.28
364 TestNetworkPlugins/group/calico/NetCatPod 11.26
365 TestNetworkPlugins/group/calico/DNS 0.18
366 TestNetworkPlugins/group/calico/Localhost 0.14
367 TestNetworkPlugins/group/calico/HairPin 0.15
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.47
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
372 TestNetworkPlugins/group/custom-flannel/DNS 0.38
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
375 TestNetworkPlugins/group/bridge/Start 61.01
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
381 TestNetworkPlugins/group/flannel/NetCatPod 11.24
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
383 TestNetworkPlugins/group/bridge/NetCatPod 11.26
384 TestNetworkPlugins/group/flannel/DNS 0.17
385 TestNetworkPlugins/group/flannel/Localhost 0.15
386 TestNetworkPlugins/group/flannel/HairPin 0.16
387 TestNetworkPlugins/group/bridge/DNS 0.16
388 TestNetworkPlugins/group/bridge/Localhost 0.13
389 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (25.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-502138 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-502138 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.470640721s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-502138
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-502138: exit status 85 (70.72019ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:55 UTC |          |
	|         | -p download-only-502138        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 05:55:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 05:55:55.088225   16087 out.go:291] Setting OutFile to fd 1 ...
	I0315 05:55:55.088595   16087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:55:55.088609   16087 out.go:304] Setting ErrFile to fd 2...
	I0315 05:55:55.088616   16087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:55:55.089041   16087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	W0315 05:55:55.089287   16087 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18213-8825/.minikube/config/config.json: open /home/jenkins/minikube-integration/18213-8825/.minikube/config/config.json: no such file or directory
	I0315 05:55:55.091074   16087 out.go:298] Setting JSON to true
	I0315 05:55:55.091909   16087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2251,"bootTime":1710479904,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 05:55:55.091981   16087 start.go:139] virtualization: kvm guest
	I0315 05:55:55.094240   16087 out.go:97] [download-only-502138] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 05:55:55.095836   16087 out.go:169] MINIKUBE_LOCATION=18213
	W0315 05:55:55.094356   16087 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball: no such file or directory
	I0315 05:55:55.094390   16087 notify.go:220] Checking for updates...
	I0315 05:55:55.098694   16087 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 05:55:55.100066   16087 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 05:55:55.101353   16087 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:55:55.102664   16087 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0315 05:55:55.105185   16087 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 05:55:55.105418   16087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 05:55:55.201392   16087 out.go:97] Using the kvm2 driver based on user configuration
	I0315 05:55:55.201431   16087 start.go:297] selected driver: kvm2
	I0315 05:55:55.201446   16087 start.go:901] validating driver "kvm2" against <nil>
	I0315 05:55:55.201756   16087 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:55:55.201860   16087 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 05:55:55.216290   16087 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 05:55:55.216340   16087 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 05:55:55.216836   16087 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0315 05:55:55.216976   16087 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 05:55:55.217033   16087 cni.go:84] Creating CNI manager for ""
	I0315 05:55:55.217046   16087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:55:55.217053   16087 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 05:55:55.217135   16087 start.go:340] cluster config:
	{Name:download-only-502138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-502138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 05:55:55.217310   16087 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:55:55.219016   16087 out.go:97] Downloading VM boot image ...
	I0315 05:55:55.219054   16087 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/iso/amd64/minikube-v1.32.1-1710459732-18213-amd64.iso
	I0315 05:56:03.894700   16087 out.go:97] Starting "download-only-502138" primary control-plane node in "download-only-502138" cluster
	I0315 05:56:03.894725   16087 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 05:56:03.990759   16087 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:03.990786   16087 cache.go:56] Caching tarball of preloaded images
	I0315 05:56:03.990949   16087 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 05:56:03.992717   16087 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0315 05:56:03.992735   16087 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:04.094183   16087 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:18.898219   16087 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:18.898304   16087 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:19.803378   16087 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 05:56:19.803711   16087 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-502138/config.json ...
	I0315 05:56:19.803741   16087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-502138/config.json: {Name:mkb50fe05cc757311f12a5484d1d5e216fffcd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:56:19.803893   16087 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 05:56:19.804091   16087 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-502138 host does not exist
	  To start a cluster, run: "minikube start -p download-only-502138"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
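The non-zero exit is consistent with the quoted output: a --download-only profile never brings up the control-plane host, so 'minikube logs' has nothing to read and the output itself points at the follow-up command. A hypothetical manual check against the same profile name (not re-run here) would be:

  minikube logs -p download-only-502138     # exits 85 in this run, since the host does not exist
  minikube start -p download-only-502138    # what the output suggests for actually creating the cluster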

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-502138
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (23.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-396128 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-396128 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.419408951s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (23.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-396128
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-396128: exit status 85 (70.127244ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:55 UTC |                     |
	|         | -p download-only-502138        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| delete  | -p download-only-502138        | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| start   | -o=json --download-only        | download-only-396128 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC |                     |
	|         | -p download-only-396128        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 05:56:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 05:56:20.896376   16288 out.go:291] Setting OutFile to fd 1 ...
	I0315 05:56:20.896860   16288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:56:20.896920   16288 out.go:304] Setting ErrFile to fd 2...
	I0315 05:56:20.896940   16288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:56:20.897452   16288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 05:56:20.898668   16288 out.go:298] Setting JSON to true
	I0315 05:56:20.899440   16288 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2277,"bootTime":1710479904,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 05:56:20.899501   16288 start.go:139] virtualization: kvm guest
	I0315 05:56:20.901607   16288 out.go:97] [download-only-396128] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 05:56:20.902927   16288 out.go:169] MINIKUBE_LOCATION=18213
	I0315 05:56:20.901802   16288 notify.go:220] Checking for updates...
	I0315 05:56:20.905397   16288 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 05:56:20.906945   16288 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 05:56:20.908385   16288 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:56:20.909887   16288 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0315 05:56:20.912587   16288 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 05:56:20.912834   16288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 05:56:20.946124   16288 out.go:97] Using the kvm2 driver based on user configuration
	I0315 05:56:20.946153   16288 start.go:297] selected driver: kvm2
	I0315 05:56:20.946177   16288 start.go:901] validating driver "kvm2" against <nil>
	I0315 05:56:20.946524   16288 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:56:20.946612   16288 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 05:56:20.961762   16288 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 05:56:20.961840   16288 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 05:56:20.962529   16288 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0315 05:56:20.962712   16288 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 05:56:20.962792   16288 cni.go:84] Creating CNI manager for ""
	I0315 05:56:20.962809   16288 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:56:20.962821   16288 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 05:56:20.962908   16288 start.go:340] cluster config:
	{Name:download-only-396128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-396128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 05:56:20.963036   16288 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:56:20.964897   16288 out.go:97] Starting "download-only-396128" primary control-plane node in "download-only-396128" cluster
	I0315 05:56:20.964915   16288 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 05:56:21.102456   16288 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:21.102482   16288 cache.go:56] Caching tarball of preloaded images
	I0315 05:56:21.102642   16288 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 05:56:21.104538   16288 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0315 05:56:21.104561   16288 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:21.207025   16288 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:33.879514   16288 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:33.879605   16288 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:34.756012   16288 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 05:56:34.756349   16288 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-396128/config.json ...
	I0315 05:56:34.756377   16288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-396128/config.json: {Name:mk284ce4883f728febd0607d39ce4201dda300d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:56:34.756581   16288 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 05:56:34.756750   16288 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-396128 host does not exist
	  To start a cluster, run: "minikube start -p download-only-396128"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-396128
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (41.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-168992 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-168992 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (41.780985944s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (41.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-168992
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-168992: exit status 85 (70.817487ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:55 UTC |                     |
	|         | -p download-only-502138           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| delete  | -p download-only-502138           | download-only-502138 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| start   | -o=json --download-only           | download-only-396128 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC |                     |
	|         | -p download-only-396128           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| delete  | -p download-only-396128           | download-only-396128 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC | 15 Mar 24 05:56 UTC |
	| start   | -o=json --download-only           | download-only-168992 | jenkins | v1.32.0 | 15 Mar 24 05:56 UTC |                     |
	|         | -p download-only-168992           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 05:56:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 05:56:44.664640   16488 out.go:291] Setting OutFile to fd 1 ...
	I0315 05:56:44.664808   16488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:56:44.664818   16488 out.go:304] Setting ErrFile to fd 2...
	I0315 05:56:44.664824   16488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 05:56:44.665034   16488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 05:56:44.665604   16488 out.go:298] Setting JSON to true
	I0315 05:56:44.666423   16488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2301,"bootTime":1710479904,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 05:56:44.666489   16488 start.go:139] virtualization: kvm guest
	I0315 05:56:44.668708   16488 out.go:97] [download-only-168992] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 05:56:44.670332   16488 out.go:169] MINIKUBE_LOCATION=18213
	I0315 05:56:44.668870   16488 notify.go:220] Checking for updates...
	I0315 05:56:44.673241   16488 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 05:56:44.674570   16488 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 05:56:44.675837   16488 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 05:56:44.677108   16488 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0315 05:56:44.679439   16488 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 05:56:44.679693   16488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 05:56:44.710652   16488 out.go:97] Using the kvm2 driver based on user configuration
	I0315 05:56:44.710685   16488 start.go:297] selected driver: kvm2
	I0315 05:56:44.710697   16488 start.go:901] validating driver "kvm2" against <nil>
	I0315 05:56:44.710994   16488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:56:44.711081   16488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18213-8825/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 05:56:44.725262   16488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 05:56:44.725319   16488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 05:56:44.725764   16488 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0315 05:56:44.725912   16488 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 05:56:44.725972   16488 cni.go:84] Creating CNI manager for ""
	I0315 05:56:44.725984   16488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 05:56:44.725990   16488 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 05:56:44.726038   16488 start.go:340] cluster config:
	{Name:download-only-168992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-168992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 05:56:44.726121   16488 iso.go:125] acquiring lock: {Name:mk048c52a0305dbd9220f756a753e61a2b267b56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 05:56:44.727648   16488 out.go:97] Starting "download-only-168992" primary control-plane node in "download-only-168992" cluster
	I0315 05:56:44.727668   16488 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 05:56:44.826823   16488 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:44.826871   16488 cache.go:56] Caching tarball of preloaded images
	I0315 05:56:44.827054   16488 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 05:56:44.828915   16488 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0315 05:56:44.828930   16488 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:44.921008   16488 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0315 05:56:55.229983   16488 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:55.231003   16488 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-8825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 05:56:55.993382   16488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0315 05:56:55.993707   16488 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-168992/config.json ...
	I0315 05:56:55.993735   16488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/download-only-168992/config.json: {Name:mkc10f31f8c559e20be7add9379e52c0149fbd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 05:56:55.993884   16488 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 05:56:55.994003   16488 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18213-8825/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-168992 host does not exist
	  To start a cluster, run: "minikube start -p download-only-168992"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-168992
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-455686 --alsologtostderr --binary-mirror http://127.0.0.1:42939 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-455686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-455686
--- PASS: TestBinaryMirror (0.56s)

TestOffline (111.8s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-314098 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-314098 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m50.768886472s)
helpers_test.go:175: Cleaning up "offline-crio-314098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-314098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-314098: (1.032171907s)
--- PASS: TestOffline (111.80s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-480837
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-480837: exit status 85 (58.166019ms)

-- stdout --
	* Profile "addons-480837" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-480837"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-480837
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-480837: exit status 85 (60.308448ms)

-- stdout --
	* Profile "addons-480837" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-480837"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (150.77s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-480837 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-480837 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.767527549s)
--- PASS: TestAddons/Setup (150.77s)

TestAddons/parallel/Registry (17.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 28.997504ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ms4xl" [57ba4009-bd31-45f4-8d43-f0fe7246bac5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00612953s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hsttb" [fddd7b1b-3abb-4bd0-a7d4-a205d2b263b7] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010513126s
addons_test.go:340: (dbg) Run:  kubectl --context addons-480837 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-480837 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-480837 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.91137572s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 ip
2024/03/15 06:00:15 [DEBUG] GET http://192.168.39.159:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.88s)

TestAddons/parallel/InspektorGadget (12.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xg65k" [befacd39-3fa0-44c2-9de7-78317cf75324] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005249928s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-480837
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-480837: (6.864377544s)
--- PASS: TestAddons/parallel/InspektorGadget (12.87s)

TestAddons/parallel/MetricsServer (6.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.88259ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-zb4x6" [79966cb5-86ce-4eae-9118-53d41994e123] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005064559s
addons_test.go:415: (dbg) Run:  kubectl --context addons-480837 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/HelmTiller (15.09s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.627734ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-g6cbc" [779f003c-8e64-4909-ae2e-adaa744eaddf] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005479799s
addons_test.go:473: (dbg) Run:  kubectl --context addons-480837 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-480837 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.331452463s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.09s)

TestAddons/parallel/CSI (67.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.705222ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-480837 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-480837 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7547c2db-7cf4-4f60-bd9d-64618a161cee] Pending
helpers_test.go:344: "task-pv-pod" [7547c2db-7cf4-4f60-bd9d-64618a161cee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7547c2db-7cf4-4f60-bd9d-64618a161cee] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004600907s
addons_test.go:584: (dbg) Run:  kubectl --context addons-480837 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-480837 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-480837 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-480837 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-480837 delete pod task-pv-pod: (1.036135018s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-480837 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-480837 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-480837 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0e916c03-232a-4c31-95c8-0a7c4fe02e5c] Pending
helpers_test.go:344: "task-pv-pod-restore" [0e916c03-232a-4c31-95c8-0a7c4fe02e5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0e916c03-232a-4c31-95c8-0a7c4fe02e5c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003814065s
addons_test.go:626: (dbg) Run:  kubectl --context addons-480837 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-480837 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-480837 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-480837 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.811648586s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.66s)

TestAddons/parallel/Headlamp (19.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-480837 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-480837 --alsologtostderr -v=1: (1.420714906s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-bj5g6" [e00a2bd4-e141-4a45-9177-28f32d939937] Pending
helpers_test.go:344: "headlamp-5485c556b-bj5g6" [e00a2bd4-e141-4a45-9177-28f32d939937] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-bj5g6" [e00a2bd4-e141-4a45-9177-28f32d939937] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.005050619s
--- PASS: TestAddons/parallel/Headlamp (19.43s)

TestAddons/parallel/CloudSpanner (6.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-b6wbj" [63576374-3efb-4921-adae-eb874b165575] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003382739s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-480837
--- PASS: TestAddons/parallel/CloudSpanner (6.77s)

TestAddons/parallel/LocalPath (59.67s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-480837 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-480837 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0d8803f3-abc4-441e-b967-2c9cfc543b56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0d8803f3-abc4-441e-b967-2c9cfc543b56] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0d8803f3-abc4-441e-b967-2c9cfc543b56] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.005272504s
addons_test.go:891: (dbg) Run:  kubectl --context addons-480837 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 ssh "cat /opt/local-path-provisioner/pvc-32155ee8-605a-4b28-a7c9-57ea10158efb_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-480837 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-480837 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-480837 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-480837 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.736743305s)
--- PASS: TestAddons/parallel/LocalPath (59.67s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bkftz" [e697891a-18dc-4004-8601-eff9e689acb4] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005602819s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-480837
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-dw7wt" [a0138eb0-3436-42ac-afab-58d7218d3d8b] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010828041s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-480837 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-480837 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (58.56s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-559541 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-559541 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.098763359s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-559541 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-559541 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-559541 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-559541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-559541
--- PASS: TestCertOptions (58.56s)

TestCertExpiration (285.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-266938 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-266938 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.63377141s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-266938 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-266938 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.937001068s)
helpers_test.go:175: Cleaning up "cert-expiration-266938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-266938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-266938: (1.02194515s)
--- PASS: TestCertExpiration (285.59s)

TestForceSystemdFlag (73.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-613029 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-613029 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.547298556s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-613029 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-613029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-613029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-613029: (1.989181869s)
--- PASS: TestForceSystemdFlag (73.82s)

TestForceSystemdEnv (62.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-397316 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-397316 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.277019864s)
helpers_test.go:175: Cleaning up "force-systemd-env-397316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-397316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-397316: (1.081845122s)
--- PASS: TestForceSystemdEnv (62.36s)

TestKVMDriverInstallOrUpdate (5.5s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.50s)

TestErrorSpam/setup (45.36s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-072790 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-072790 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-072790 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-072790 --driver=kvm2  --container-runtime=crio: (45.355688645s)
--- PASS: TestErrorSpam/setup (45.36s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (5.76s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop: (2.29510875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop: (1.39384667s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-072790 --log_dir /tmp/nospam-072790 stop: (2.074886906s)
--- PASS: TestErrorSpam/stop (5.76s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18213-8825/.minikube/files/etc/test/nested/copy/16075/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-380088 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.029307079s)
--- PASS: TestFunctional/serial/StartWithProxy (59.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-380088 --alsologtostderr -v=8: (39.29551756s)
functional_test.go:659: soft start took 39.296187742s for "functional-380088" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.30s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-380088 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:3.1: (1.082084921s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:3.3: (1.130816571s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 cache add registry.k8s.io/pause:latest: (1.123708391s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-380088 /tmp/TestFunctionalserialCacheCmdcacheadd_local3333372354/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache add minikube-local-cache-test:functional-380088
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 cache add minikube-local-cache-test:functional-380088: (1.872817567s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache delete minikube-local-cache-test:functional-380088
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-380088
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.125187ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
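The cache_reload flow above (remove the image inside the node, confirm crictl no longer sees it, run `cache reload`, confirm it is back) can be reproduced outside the test harness. Below is a minimal Go sketch of that sequence, assuming a minikube binary on PATH and the functional-380088 profile already running with the crio runtime; error handling is reduced to printing.

package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube command against the functional-380088 profile and
// returns its combined output plus any exit error, mirroring the log lines above.
func mk(args ...string) (string, error) {
	full := append([]string{"-p", "functional-380088"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	return string(out), err
}

func main() {
	img := "registry.k8s.io/pause:latest"
	mk("ssh", "sudo", "crictl", "rmi", img)
	if _, err := mk("ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("expected inspecti to fail right after rmi")
	}
	mk("cache", "reload")
	if out, err := mk("ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after cache reload:", err, out)
	} else {
		fmt.Println("image restored from the host cache")
	}
}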
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 kubectl -- --context functional-380088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-380088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
TestFunctional/serial/ExtraConfig (32.31s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-380088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.305258686s)
functional_test.go:757: restart took 32.305394007s for "functional-380088" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.31s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-380088 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
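The ComponentHealth check above boils down to reading standard Pod status fields for the tier=control-plane pods. A minimal sketch of the same check, assuming kubectl and the functional-380088 context are available; only stock Pod fields (status.phase and the Ready condition) are used.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList matches just the fields the health check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-380088",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}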
TestFunctional/serial/LogsCmd (1.61s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 logs: (1.60847084s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)
TestFunctional/serial/LogsFileCmd (1.66s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 logs --file /tmp/TestFunctionalserialLogsFileCmd2024770062/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 logs --file /tmp/TestFunctionalserialLogsFileCmd2024770062/001/logs.txt: (1.655676465s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.66s)
TestFunctional/serial/InvalidService (4.29s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-380088 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-380088
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-380088: exit status 115 (292.569097ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.5:31968 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-380088 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 config get cpus: exit status 14 (66.21864ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 config get cpus: exit status 14 (62.997117ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
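Exit status 14 above is what `config get` returns in this run when the key is not set, and telling that apart from an ordinary failure takes an *exec.ExitError check. A minimal sketch, assuming minikube is on PATH; the meaning of code 14 is taken from the log above, not from minikube documentation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-380088", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		// Matches the "specified key could not be found in config" case in the log.
		fmt.Println("cpus is not set (exit 14):", string(out))
	case err != nil:
		fmt.Println("unexpected failure:", err)
	default:
		fmt.Println("cpus =", string(out))
	}
}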
TestFunctional/parallel/DashboardCmd (18.73s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380088 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380088 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23320: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.73s)
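The daemon/stop pattern in the DashboardCmd block (start `minikube dashboard --url` in the background, consume its output, then tear it down while tolerating a process that already exited) looks roughly like the sketch below in Go. Flags are copied from the log; most error handling is dropped.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "dashboard", "--url", "--port", "36195", "-p", "functional-380088")
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Read the first line the backgrounded dashboard prints (eventually the URL).
	if sc := bufio.NewScanner(stdout); sc.Scan() {
		fmt.Println("dashboard says:", sc.Text())
	}
	// Stopping can race with the proxy exiting on its own, which is why the log
	// above tolerates "unable to kill pid ...: process already finished".
	_ = cmd.Process.Kill()
	_ = cmd.Wait()
}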
TestFunctional/parallel/DryRun (0.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-380088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.130181ms)
-- stdout --
	* [functional-380088] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0315 06:09:22.680365   22904 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:09:22.680498   22904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:09:22.680508   22904 out.go:304] Setting ErrFile to fd 2...
	I0315 06:09:22.680514   22904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:09:22.680768   22904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:09:22.681501   22904 out.go:298] Setting JSON to false
	I0315 06:09:22.682484   22904 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3059,"bootTime":1710479904,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:09:22.682561   22904 start.go:139] virtualization: kvm guest
	I0315 06:09:22.684821   22904 out.go:177] * [functional-380088] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 06:09:22.687058   22904 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:09:22.687145   22904 notify.go:220] Checking for updates...
	I0315 06:09:22.689682   22904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:09:22.691487   22904 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:09:22.693075   22904 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:09:22.694464   22904 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:09:22.695898   22904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:09:22.698014   22904 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:09:22.698644   22904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:09:22.698700   22904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:09:22.714848   22904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0315 06:09:22.715388   22904 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:09:22.716012   22904 main.go:141] libmachine: Using API Version  1
	I0315 06:09:22.716044   22904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:09:22.716439   22904 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:09:22.716705   22904 main.go:141] libmachine: (functional-380088) Calling .DriverName
	I0315 06:09:22.716949   22904 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:09:22.717294   22904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:09:22.717347   22904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:09:22.734661   22904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35219
	I0315 06:09:22.735045   22904 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:09:22.735555   22904 main.go:141] libmachine: Using API Version  1
	I0315 06:09:22.735580   22904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:09:22.735988   22904 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:09:22.736276   22904 main.go:141] libmachine: (functional-380088) Calling .DriverName
	I0315 06:09:22.771113   22904 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 06:09:22.772782   22904 start.go:297] selected driver: kvm2
	I0315 06:09:22.772797   22904 start.go:901] validating driver "kvm2" against &{Name:functional-380088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-380088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:09:22.772938   22904 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:09:22.775206   22904 out.go:177] 
	W0315 06:09:22.776487   22904 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0315 06:09:22.777903   22904 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-380088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-380088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (178.566834ms)
-- stdout --
	* [functional-380088] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0315 06:09:22.518402   22853 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:09:22.518631   22853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:09:22.518639   22853 out.go:304] Setting ErrFile to fd 2...
	I0315 06:09:22.518644   22853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:09:22.518944   22853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:09:22.519451   22853 out.go:298] Setting JSON to false
	I0315 06:09:22.520358   22853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3059,"bootTime":1710479904,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 06:09:22.520423   22853 start.go:139] virtualization: kvm guest
	I0315 06:09:22.523423   22853 out.go:177] * [functional-380088] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0315 06:09:22.525015   22853 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 06:09:22.524983   22853 notify.go:220] Checking for updates...
	I0315 06:09:22.527079   22853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 06:09:22.528375   22853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 06:09:22.529658   22853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 06:09:22.531296   22853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 06:09:22.533151   22853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 06:09:22.535230   22853 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:09:22.535823   22853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:09:22.535876   22853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:09:22.557839   22853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0315 06:09:22.558317   22853 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:09:22.559003   22853 main.go:141] libmachine: Using API Version  1
	I0315 06:09:22.559020   22853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:09:22.559423   22853 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:09:22.559692   22853 main.go:141] libmachine: (functional-380088) Calling .DriverName
	I0315 06:09:22.560031   22853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 06:09:22.560555   22853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:09:22.560622   22853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:09:22.577252   22853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0315 06:09:22.577662   22853 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:09:22.578151   22853 main.go:141] libmachine: Using API Version  1
	I0315 06:09:22.578172   22853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:09:22.578452   22853 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:09:22.578593   22853 main.go:141] libmachine: (functional-380088) Calling .DriverName
	I0315 06:09:22.614394   22853 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0315 06:09:22.615798   22853 start.go:297] selected driver: kvm2
	I0315 06:09:22.615816   22853 start.go:901] validating driver "kvm2" against &{Name:functional-380088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18213/minikube-v1.32.1-1710459732-18213-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-380088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 06:09:22.615941   22853 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 06:09:22.618756   22853 out.go:177] 
	W0315 06:09:22.620343   22853 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0315 06:09:22.621888   22853 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
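The French output above comes from the same dry-run invocation as DryRun, just under a French locale. A sketch of how that can be provoked, under the assumption that minikube picks its translation from the standard LC_ALL/LANG locale variables (the exact variable the test sets is not visible in this log).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-380088",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	// Assumed: forcing the locale to fr is enough to get the translated messages.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput() // exit status 23 is expected for 250MB, as in the log
	fmt.Printf("%s\n(err: %v)\n", out, err)
}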
TestFunctional/parallel/StatusCmd (1.42s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.42s)
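The StatusCmd run above reads the same state three ways: plain text, a Go-template format string, and JSON. A minimal sketch decoding the JSON form into the fields the template references; the assumptions are that the JSON keys match the template field names (Host, Kubelet, APIServer, Kubeconfig) and that a single-node profile yields a single object rather than a list.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors the fields used by the -f Go-template in the log above.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-380088", "status", "-o", "json").Output()
	if err != nil {
		// minikube status also encodes cluster state in its exit code, so a
		// non-zero exit does not necessarily mean the output is unusable.
		fmt.Println("status exited non-zero:", err)
	}
	var st status
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("decode:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}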
TestFunctional/parallel/ServiceCmdConnect (10.7s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-380088 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-380088 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-gr8wb" [c190896f-d14b-4429-9324-5ff32de25f8e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-gr8wb" [c190896f-d14b-4429-9324-5ff32de25f8e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004580959s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.5:31750
functional_test.go:1671: http://192.168.39.5:31750: success! body:
Hostname: hello-node-connect-55497b8b78-gr8wb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.5:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.5:31750
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
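The ServiceCmdConnect check above is essentially: resolve the NodePort URL with `minikube service ... --url`, then GET it and look for the echoserver's Hostname line. A minimal sketch, assuming the deployment and service from the log already exist and the printed URL (http://192.168.39.5:31750 in this run) is reachable from the host.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-380088",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // assumes a single URL line for this single-port service
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("echoserver reachable, body contains Hostname:",
		strings.Contains(string(body), "Hostname:"))
}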
TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)
TestFunctional/parallel/PersistentVolumeClaim (42.62s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f1a52be1-68e5-4c17-befa-f14c9523cf63] Running
2024/03/15 06:09:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005131477s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-380088 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-380088 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-380088 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-380088 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380088 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e64b7f6b-9d6c-4cf8-b4a6-7c6e54922365] Pending
helpers_test.go:344: "sp-pod" [e64b7f6b-9d6c-4cf8-b4a6-7c6e54922365] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e64b7f6b-9d6c-4cf8-b4a6-7c6e54922365] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003837528s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-380088 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-380088 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-380088 delete -f testdata/storage-provisioner/pod.yaml: (1.014748331s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380088 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ffb8eb9-9b4b-439d-a539-fafb9120cf1d] Pending
helpers_test.go:344: "sp-pod" [2ffb8eb9-9b4b-439d-a539-fafb9120cf1d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ffb8eb9-9b4b-439d-a539-fafb9120cf1d] Running
E0315 06:10:19.013639   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004116786s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-380088 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.62s)
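The persistence check above writes a file through the first sp-pod, deletes the pod, recreates it from the same manifest, and confirms the file is still on the PVC-backed mount. A compressed sketch of that sequence, assuming the functional-380088 context and the testdata manifests referenced in the log; the readiness waits the real test performs between steps are omitted.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl command against the functional-380088 context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-380088"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The real test waits for the new pod to become Ready before this check.)
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount: %s (err=%v)\n", out, err)
}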
TestFunctional/parallel/SSHCmd (0.52s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)
TestFunctional/parallel/CpCmd (1.57s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh -n functional-380088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cp functional-380088:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1960241351/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh -n functional-380088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh -n functional-380088 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)
TestFunctional/parallel/MySQL (38.35s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-380088 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-p5lkk" [0abe8020-38f5-441f-b1e1-e975bd736829] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-p5lkk" [0abe8020-38f5-441f-b1e1-e975bd736829] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.023233959s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380088 exec mysql-859648c796-p5lkk -- mysql -ppassword -e "show databases;"
E0315 06:10:01.091931   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-380088 exec mysql-859648c796-p5lkk -- mysql -ppassword -e "show databases;": exit status 1 (329.544549ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380088 exec mysql-859648c796-p5lkk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-380088 exec mysql-859648c796-p5lkk -- mysql -ppassword -e "show databases;": exit status 1 (153.264717ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380088 exec mysql-859648c796-p5lkk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (38.35s)
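The two failed queries above (ERROR 1045, then ERROR 2002) are the usual window in which the mysql container is Running but mysqld is still initializing, which is why the test simply retries. A small retry sketch around the same kubectl exec; the pod name is copied from this run and is only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-380088", "exec", "mysql-859648c796-p5lkk", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return
		}
		// Transient auth/socket errors are expected while mysqld finishes starting.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
}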
TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16075/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /etc/test/nested/copy/16075/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
TestFunctional/parallel/CertSync (1.71s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16075.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /etc/ssl/certs/16075.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16075.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /usr/share/ca-certificates/16075.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/160752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /etc/ssl/certs/160752.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/160752.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /usr/share/ca-certificates/160752.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-380088 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "sudo systemctl is-active docker": exit status 1 (297.969147ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "sudo systemctl is-active containerd": exit status 1 (266.090699ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
TestFunctional/parallel/License (0.65s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-380088 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-380088 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gx22x" [959d28ae-e332-4d2e-bf6f-fc84c1051e2a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gx22x" [959d28ae-e332-4d2e-bf6f-fc84c1051e2a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004771381s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
TestFunctional/parallel/MountCmd/any-port (11.71s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdany-port3809931895/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710482961497558175" to /tmp/TestFunctionalparallelMountCmdany-port3809931895/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710482961497558175" to /tmp/TestFunctionalparallelMountCmdany-port3809931895/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710482961497558175" to /tmp/TestFunctionalparallelMountCmdany-port3809931895/001/test-1710482961497558175
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.354562ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 15 06:09 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 15 06:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 15 06:09 test-1710482961497558175
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh cat /mount-9p/test-1710482961497558175
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-380088 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4709c707-a3d1-4207-9c51-1760142c5b36] Pending
helpers_test.go:344: "busybox-mount" [4709c707-a3d1-4207-9c51-1760142c5b36] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4709c707-a3d1-4207-9c51-1760142c5b36] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4709c707-a3d1-4207-9c51-1760142c5b36] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005586855s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-380088 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdany-port3809931895/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.71s)
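A minimal Go sketch of the mount-readiness probe exercised above. It assumes the out/minikube-linux-amd64 binary, the functional-380088 profile and the /mount-9p mount point shown in this log; the waitForMount helper itself is hypothetical and not part of functional_test_mount_test.go. It simply retries findmnt over ssh, since the first probe in the log races the mount daemon and exits 1 before succeeding on the retry.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls "findmnt" inside the guest over ssh until the 9p mount
// shows up, mirroring the retry visible in the log above. Hypothetical helper.
func waitForMount(profile, mountPoint string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount ready:\n%s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // first attempt often races the mount daemon
	}
	return fmt.Errorf("%s never appeared as a 9p mount after %d attempts", mountPoint, attempts)
}

func main() {
	if err := waitForMount("functional-380088", "/mount-9p", 10); err != nil {
		fmt.Println(err)
	}
}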

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "284.315439ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.834133ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "356.600747ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "75.495283ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
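A rough sketch of the timing pattern behind the Took "284.315439ms" / Took "356.600747ms" lines above: wall-clock time measured around the profile listing with time.Since. This is not the test's own helper, only an illustration assuming the same binary path.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same invocation as the log above; Output() captures stdout only.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	elapsed := time.Since(start)
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	fmt.Printf("Took %q to list %d bytes of profile JSON\n", elapsed.String(), len(out))
}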

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service list -o json
functional_test.go:1490: Took "841.960787ms" to run "out/minikube-linux-amd64 -p functional-380088 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdspecific-port1776238799/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.619013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdspecific-port1776238799/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "sudo umount -f /mount-9p": exit status 1 (220.674029ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-380088 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdspecific-port1776238799/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)
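A hedged sketch of the cleanup step logged above: force-unmount the 9p mount over ssh, but treat umount's "not mounted." answer (surfaced here as a non-zero ssh exit) as success, because the mount process may already have torn the mount down. The forceUnmount helper is made up for illustration; the command line is the one from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount runs "sudo umount -f" inside the guest and ignores the
// "not mounted" case seen in the log above. Hypothetical helper.
func forceUnmount(profile, mountPoint string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("sudo umount -f %s", mountPoint))
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // nothing left to clean up
	}
	return err
}

func main() {
	if err := forceUnmount("functional-380088", "/mount-9p"); err != nil {
		fmt.Println("unmount failed:", err)
	}
}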

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.5:32166
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.5:32166
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
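A small sketch (not from functional_test.go) of what can be done with the endpoint the test discovered above: parse it with net/url and split it into host and NodePort, for example to check that the port sits in the default NodePort range.

package main

import (
	"fmt"
	"net/url"
	"strconv"
)

func main() {
	endpoint := "http://192.168.39.5:32166" // value reported in the log above

	u, err := url.Parse(endpoint)
	if err != nil {
		panic(err)
	}
	port, err := strconv.Atoi(u.Port())
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%s port=%d inDefaultNodePortRange=%v\n",
		u.Hostname(), port, port >= 30000 && port <= 32767)
}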

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380088 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-380088
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-380088
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380088 image ls --format short --alsologtostderr:
I0315 06:10:06.084863   24819 out.go:291] Setting OutFile to fd 1 ...
I0315 06:10:06.084966   24819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.084971   24819 out.go:304] Setting ErrFile to fd 2...
I0315 06:10:06.084976   24819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.085260   24819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
I0315 06:10:06.085896   24819 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.086000   24819 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.086437   24819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.086479   24819 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.102979   24819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
I0315 06:10:06.103445   24819 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.104057   24819 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.104086   24819 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.104430   24819 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.104673   24819 main.go:141] libmachine: (functional-380088) Calling .GetState
I0315 06:10:06.106430   24819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.106472   24819 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.121513   24819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
I0315 06:10:06.122002   24819 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.122518   24819 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.122537   24819 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.122798   24819 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.122947   24819 main.go:141] libmachine: (functional-380088) Calling .DriverName
I0315 06:10:06.123185   24819 ssh_runner.go:195] Run: systemctl --version
I0315 06:10:06.123208   24819 main.go:141] libmachine: (functional-380088) Calling .GetSSHHostname
I0315 06:10:06.126116   24819 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.126548   24819 main.go:141] libmachine: (functional-380088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b6:6b", ip: ""} in network mk-functional-380088: {Iface:virbr1 ExpiryTime:2024-03-15 07:07:09 +0000 UTC Type:0 Mac:52:54:00:a8:b6:6b Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:functional-380088 Clientid:01:52:54:00:a8:b6:6b}
I0315 06:10:06.126578   24819 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined IP address 192.168.39.5 and MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.126840   24819 main.go:141] libmachine: (functional-380088) Calling .GetSSHPort
I0315 06:10:06.127015   24819 main.go:141] libmachine: (functional-380088) Calling .GetSSHKeyPath
I0315 06:10:06.127153   24819 main.go:141] libmachine: (functional-380088) Calling .GetSSHUsername
I0315 06:10:06.127291   24819 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/functional-380088/id_rsa Username:docker}
I0315 06:10:06.211266   24819 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 06:10:06.282297   24819 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.282312   24819 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.282743   24819 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:06.282766   24819 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.282807   24819 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:06.282833   24819 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.282845   24819 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.283143   24819 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.283138   24819 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:06.283158   24819 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380088 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-380088  | d6cc0683de69f | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-380088  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380088 image ls --format table --alsologtostderr:
I0315 06:10:06.371440   24908 out.go:291] Setting OutFile to fd 1 ...
I0315 06:10:06.371555   24908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.371562   24908 out.go:304] Setting ErrFile to fd 2...
I0315 06:10:06.371566   24908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.371776   24908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
I0315 06:10:06.372326   24908 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.372416   24908 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.374368   24908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.374409   24908 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.391068   24908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
I0315 06:10:06.391565   24908 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.392209   24908 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.392236   24908 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.392599   24908 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.392788   24908 main.go:141] libmachine: (functional-380088) Calling .GetState
I0315 06:10:06.394857   24908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.394885   24908 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.410203   24908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46017
I0315 06:10:06.410656   24908 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.411250   24908 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.411276   24908 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.411645   24908 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.411860   24908 main.go:141] libmachine: (functional-380088) Calling .DriverName
I0315 06:10:06.412082   24908 ssh_runner.go:195] Run: systemctl --version
I0315 06:10:06.412106   24908 main.go:141] libmachine: (functional-380088) Calling .GetSSHHostname
I0315 06:10:06.415268   24908 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.415679   24908 main.go:141] libmachine: (functional-380088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b6:6b", ip: ""} in network mk-functional-380088: {Iface:virbr1 ExpiryTime:2024-03-15 07:07:09 +0000 UTC Type:0 Mac:52:54:00:a8:b6:6b Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:functional-380088 Clientid:01:52:54:00:a8:b6:6b}
I0315 06:10:06.415759   24908 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined IP address 192.168.39.5 and MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.415965   24908 main.go:141] libmachine: (functional-380088) Calling .GetSSHPort
I0315 06:10:06.416127   24908 main.go:141] libmachine: (functional-380088) Calling .GetSSHKeyPath
I0315 06:10:06.416280   24908 main.go:141] libmachine: (functional-380088) Calling .GetSSHUsername
I0315 06:10:06.416419   24908 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/functional-380088/id_rsa Username:docker}
I0315 06:10:06.512588   24908 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 06:10:06.582910   24908 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.582927   24908 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.583241   24908 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.583251   24908 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:06.583257   24908 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:06.583285   24908 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.583292   24908 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.583574   24908 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:06.583629   24908 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.583664   24908 main.go:141] libmachine: Making call to close connection to plugin binary
E0315 06:10:08.772766   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380088 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d6cc0683de69f7c93bbee64b25241a0b8641dc248db0f2393359ca1c985b3820","repoDigests":["localhost/minikube-local-cache-test@sha256:436d64bacdd9dbecca49f641654a5dd6077432e13a42f566b5ad929fe72ac437"],"repoTags":["localhost/minikube-local-cache-test:functional-380088"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDiges
ts":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-380088"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e9
9447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha25
6:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217
b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/d
ashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e
63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380088 image ls --format json --alsologtostderr:
I0315 06:10:06.358321   24902 out.go:291] Setting OutFile to fd 1 ...
I0315 06:10:06.358491   24902 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.358505   24902 out.go:304] Setting ErrFile to fd 2...
I0315 06:10:06.358512   24902 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.358869   24902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
I0315 06:10:06.359804   24902 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.359956   24902 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.360540   24902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.360607   24902 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.375765   24902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
I0315 06:10:06.376175   24902 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.376728   24902 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.376746   24902 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.377246   24902 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.377485   24902 main.go:141] libmachine: (functional-380088) Calling .GetState
I0315 06:10:06.379530   24902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.379582   24902 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.393979   24902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35039
I0315 06:10:06.394611   24902 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.395259   24902 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.395277   24902 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.395712   24902 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.395875   24902 main.go:141] libmachine: (functional-380088) Calling .DriverName
I0315 06:10:06.396083   24902 ssh_runner.go:195] Run: systemctl --version
I0315 06:10:06.396105   24902 main.go:141] libmachine: (functional-380088) Calling .GetSSHHostname
I0315 06:10:06.398517   24902 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.398881   24902 main.go:141] libmachine: (functional-380088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b6:6b", ip: ""} in network mk-functional-380088: {Iface:virbr1 ExpiryTime:2024-03-15 07:07:09 +0000 UTC Type:0 Mac:52:54:00:a8:b6:6b Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:functional-380088 Clientid:01:52:54:00:a8:b6:6b}
I0315 06:10:06.398905   24902 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined IP address 192.168.39.5 and MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.398982   24902 main.go:141] libmachine: (functional-380088) Calling .GetSSHPort
I0315 06:10:06.399142   24902 main.go:141] libmachine: (functional-380088) Calling .GetSSHKeyPath
I0315 06:10:06.399300   24902 main.go:141] libmachine: (functional-380088) Calling .GetSSHUsername
I0315 06:10:06.399411   24902 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/functional-380088/id_rsa Username:docker}
I0315 06:10:06.479388   24902 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 06:10:06.541167   24902 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.541183   24902 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.541484   24902 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.541501   24902 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:06.541515   24902 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.541523   24902 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.541786   24902 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:06.541825   24902 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.541837   24902 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
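A sketch of decoding the JSON image listing shown above. The struct and field names here are mine, not minikube's; the fields (id, repoDigests, repoTags, and size reported as a string of bytes) are taken from the payload in the log, and the sample input below is an abbreviated single entry.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors one element of the "image ls --format json" output
// captured above. Illustrative type, not minikube's own.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Abbreviated sample taken from the listing above.
	raw := []byte(`[{"id":"6e38f40d628d","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]`)

	var images []imageEntry
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}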

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380088 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-380088
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: d6cc0683de69f7c93bbee64b25241a0b8641dc248db0f2393359ca1c985b3820
repoDigests:
- localhost/minikube-local-cache-test@sha256:436d64bacdd9dbecca49f641654a5dd6077432e13a42f566b5ad929fe72ac437
repoTags:
- localhost/minikube-local-cache-test:functional-380088
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380088 image ls --format yaml --alsologtostderr:
I0315 06:10:06.092839   24817 out.go:291] Setting OutFile to fd 1 ...
I0315 06:10:06.093021   24817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.093046   24817 out.go:304] Setting ErrFile to fd 2...
I0315 06:10:06.093061   24817 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.093395   24817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
I0315 06:10:06.094238   24817 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.094429   24817 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.095001   24817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.095082   24817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.113608   24817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
I0315 06:10:06.114161   24817 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.114580   24817 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.114595   24817 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.114837   24817 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.114937   24817 main.go:141] libmachine: (functional-380088) Calling .GetState
I0315 06:10:06.116622   24817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.116662   24817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.130619   24817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
I0315 06:10:06.131020   24817 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.131541   24817 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.131565   24817 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.131866   24817 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.132028   24817 main.go:141] libmachine: (functional-380088) Calling .DriverName
I0315 06:10:06.132205   24817 ssh_runner.go:195] Run: systemctl --version
I0315 06:10:06.132222   24817 main.go:141] libmachine: (functional-380088) Calling .GetSSHHostname
I0315 06:10:06.134802   24817 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.135132   24817 main.go:141] libmachine: (functional-380088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b6:6b", ip: ""} in network mk-functional-380088: {Iface:virbr1 ExpiryTime:2024-03-15 07:07:09 +0000 UTC Type:0 Mac:52:54:00:a8:b6:6b Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:functional-380088 Clientid:01:52:54:00:a8:b6:6b}
I0315 06:10:06.135172   24817 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined IP address 192.168.39.5 and MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.135260   24817 main.go:141] libmachine: (functional-380088) Calling .GetSSHPort
I0315 06:10:06.135396   24817 main.go:141] libmachine: (functional-380088) Calling .GetSSHKeyPath
I0315 06:10:06.135542   24817 main.go:141] libmachine: (functional-380088) Calling .GetSSHUsername
I0315 06:10:06.135671   24817 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/functional-380088/id_rsa Username:docker}
I0315 06:10:06.219975   24817 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 06:10:06.292413   24817 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.292428   24817 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.292726   24817 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.292743   24817 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:06.292760   24817 main.go:141] libmachine: Making call to close driver server
I0315 06:10:06.292769   24817 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:06.292963   24817 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:06.292982   24817 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh pgrep buildkitd: exit status 1 (245.957163ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image build -t localhost/my-image:functional-380088 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image build -t localhost/my-image:functional-380088 testdata/build --alsologtostderr: (3.12432143s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-380088 image build -t localhost/my-image:functional-380088 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 40afb890ccd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-380088
--> ff0a12e8ce9
Successfully tagged localhost/my-image:functional-380088
ff0a12e8ce921d3ede66aab822d7543cc2ecf92878e4ed61cdb80c91c60a0d0b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-380088 image build -t localhost/my-image:functional-380088 testdata/build --alsologtostderr:
I0315 06:10:06.330368   24892 out.go:291] Setting OutFile to fd 1 ...
I0315 06:10:06.330752   24892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.330766   24892 out.go:304] Setting ErrFile to fd 2...
I0315 06:10:06.330772   24892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 06:10:06.331037   24892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
I0315 06:10:06.331796   24892 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.332475   24892 config.go:182] Loaded profile config "functional-380088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 06:10:06.332976   24892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.333016   24892 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.350437   24892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
I0315 06:10:06.351005   24892 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.351640   24892 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.351658   24892 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.352042   24892 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.352274   24892 main.go:141] libmachine: (functional-380088) Calling .GetState
I0315 06:10:06.354192   24892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 06:10:06.354229   24892 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 06:10:06.372307   24892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
I0315 06:10:06.372716   24892 main.go:141] libmachine: () Calling .GetVersion
I0315 06:10:06.374310   24892 main.go:141] libmachine: Using API Version  1
I0315 06:10:06.374325   24892 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 06:10:06.374776   24892 main.go:141] libmachine: () Calling .GetMachineName
I0315 06:10:06.374966   24892 main.go:141] libmachine: (functional-380088) Calling .DriverName
I0315 06:10:06.375164   24892 ssh_runner.go:195] Run: systemctl --version
I0315 06:10:06.375186   24892 main.go:141] libmachine: (functional-380088) Calling .GetSSHHostname
I0315 06:10:06.377930   24892 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.378502   24892 main.go:141] libmachine: (functional-380088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:b6:6b", ip: ""} in network mk-functional-380088: {Iface:virbr1 ExpiryTime:2024-03-15 07:07:09 +0000 UTC Type:0 Mac:52:54:00:a8:b6:6b Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:functional-380088 Clientid:01:52:54:00:a8:b6:6b}
I0315 06:10:06.378537   24892 main.go:141] libmachine: (functional-380088) DBG | domain functional-380088 has defined IP address 192.168.39.5 and MAC address 52:54:00:a8:b6:6b in network mk-functional-380088
I0315 06:10:06.378828   24892 main.go:141] libmachine: (functional-380088) Calling .GetSSHPort
I0315 06:10:06.379000   24892 main.go:141] libmachine: (functional-380088) Calling .GetSSHKeyPath
I0315 06:10:06.379160   24892 main.go:141] libmachine: (functional-380088) Calling .GetSSHUsername
I0315 06:10:06.379287   24892 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/functional-380088/id_rsa Username:docker}
I0315 06:10:06.459285   24892 build_images.go:161] Building image from path: /tmp/build.3154429565.tar
I0315 06:10:06.459364   24892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0315 06:10:06.470550   24892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3154429565.tar
I0315 06:10:06.475573   24892 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3154429565.tar: stat -c "%s %y" /var/lib/minikube/build/build.3154429565.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3154429565.tar': No such file or directory
I0315 06:10:06.475605   24892 ssh_runner.go:362] scp /tmp/build.3154429565.tar --> /var/lib/minikube/build/build.3154429565.tar (3072 bytes)
I0315 06:10:06.505624   24892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3154429565
I0315 06:10:06.537951   24892 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3154429565 -xf /var/lib/minikube/build/build.3154429565.tar
I0315 06:10:06.551659   24892 crio.go:297] Building image: /var/lib/minikube/build/build.3154429565
I0315 06:10:06.551722   24892 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-380088 /var/lib/minikube/build/build.3154429565 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0315 06:10:09.355786   24892 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-380088 /var/lib/minikube/build/build.3154429565 --cgroup-manager=cgroupfs: (2.804035922s)
I0315 06:10:09.355867   24892 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3154429565
I0315 06:10:09.367849   24892 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3154429565.tar
I0315 06:10:09.382172   24892 build_images.go:217] Built localhost/my-image:functional-380088 from /tmp/build.3154429565.tar
I0315 06:10:09.382215   24892 build_images.go:133] succeeded building to: functional-380088
I0315 06:10:09.382222   24892 build_images.go:134] failed building to: 
I0315 06:10:09.382282   24892 main.go:141] libmachine: Making call to close driver server
I0315 06:10:09.382316   24892 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:09.382610   24892 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
I0315 06:10:09.382627   24892 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:09.382651   24892 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:09.382661   24892 main.go:141] libmachine: Making call to close driver server
I0315 06:10:09.382670   24892 main.go:141] libmachine: (functional-380088) Calling .Close
I0315 06:10:09.382903   24892 main.go:141] libmachine: Successfully made call to close driver server
I0315 06:10:09.382922   24892 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 06:10:09.382903   24892 main.go:141] libmachine: (functional-380088) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.60s)
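Note: the build flow logged above (tar the build context, copy it to /var/lib/minikube/build/ on the node, and run podman build there) is what the public `minikube image build` command drives end to end. A minimal sketch of reproducing it by hand, assuming the functional-380088 profile from this run and an illustrative local directory ./img-ctx containing a Dockerfile (the directory name is not taken from the log):

    # Build inside the node's CRI-O/podman storage rather than a local daemon.
    out/minikube-linux-amd64 -p functional-380088 image build \
      -t localhost/my-image:functional-380088 ./img-ctx

    # Confirm the image is visible to the node's container runtime.
    out/minikube-linux-amd64 -p functional-380088 image ls | grep my-image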

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.047500822s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-380088
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.07s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T" /mount1: exit status 1 (283.687258ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-380088 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-380088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1094447633/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)
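Note: the cleanup being verified above is the --kill path of `minikube mount`. A minimal sketch of the same flow, assuming the functional-380088 profile and an illustrative host directory /tmp/hostdir; each mount process stays in the background, as the test's daemon helpers do:

    # Expose one host directory at three guest mount points.
    out/minikube-linux-amd64 mount -p functional-380088 /tmp/hostdir:/mount1 &
    out/minikube-linux-amd64 mount -p functional-380088 /tmp/hostdir:/mount2 &
    out/minikube-linux-amd64 mount -p functional-380088 /tmp/hostdir:/mount3 &

    # Each mount should be resolvable inside the VM once the daemons settle.
    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-amd64 -p functional-380088 ssh "findmnt -T $m"
    done

    # Kill every mount process registered for this profile in one call.
    out/minikube-linux-amd64 mount -p functional-380088 --kill=true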

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr: (4.958478617s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr: (2.785636901s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.285285747s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-380088
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image load --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr: (9.394611775s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.07s)
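Note: the tag-and-load path above is the usual way to push a locally pulled or retagged image from the host's docker daemon into the node without a registry. The commands exercised by the test, collected for reference:

    # Retag a pulled image with the profile name and load it via --daemon.
    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 \
      gcr.io/google-containers/addon-resizer:functional-380088
    out/minikube-linux-amd64 -p functional-380088 image load --daemon \
      gcr.io/google-containers/addon-resizer:functional-380088
    out/minikube-linux-amd64 -p functional-380088 image ls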

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image save gcr.io/google-containers/addon-resizer:functional-380088 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0315 06:09:58.532672   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.538725   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.549050   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.569378   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.609729   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.690134   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:58.850555   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:59.170727   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:09:59.811017   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image save gcr.io/google-containers/addon-resizer:functional-380088 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.012022832s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image rm gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image rm gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr: (1.570598223s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0315 06:10:03.652135   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.513327869s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)
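Note: ImageSaveToFile and ImageLoadFromFile together cover the tarball round trip, which is useful when no docker daemon is available on the host. A minimal sketch, assuming the functional-380088 profile and an illustrative output path ./addon-resizer-save.tar:

    # Save from the node's container storage to a host tarball, then load it back.
    out/minikube-linux-amd64 -p functional-380088 image save \
      gcr.io/google-containers/addon-resizer:functional-380088 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-380088 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-380088 image ls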

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-380088
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-380088 image save --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-380088 image save --daemon gcr.io/google-containers/addon-resizer:functional-380088 --alsologtostderr: (1.052435367s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-380088
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.08s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-380088
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-380088
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-380088
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (227.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-866665 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 06:10:39.494068   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:11:20.454791   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:12:42.376023   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-866665 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m46.544613658s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (227.29s)
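Note: the command under test is the --ha start path, which provisions three control-plane nodes behind a single API endpoint. The invocation from this run, for reference:

    # Start an HA (multi-control-plane) cluster on KVM with CRI-O and
    # verify that every node reports Running.
    out/minikube-linux-amd64 start -p ha-866665 --wait=true --memory=2200 --ha \
      -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr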

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-866665 -- rollout status deployment/busybox: (4.077019244s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-82knb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-sdxnc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-xc5x4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-82knb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-sdxnc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-xc5x4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-82knb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-sdxnc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-xc5x4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.58s)
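Note: the checks above boil down to deploying the busybox DNS-test workload and resolving cluster names from each replica. A minimal sketch, assuming the default namespace contains only the busybox pods created by the test manifest:

    # Deploy and wait for the busybox DNS-test workload.
    out/minikube-linux-amd64 kubectl -p ha-866665 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-866665 -- rollout status deployment/busybox

    # Resolve the cluster DNS name from every replica.
    for pod in $(out/minikube-linux-amd64 kubectl -p ha-866665 -- \
        get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 kubectl -p ha-866665 -- exec "$pod" -- \
        nslookup kubernetes.default.svc.cluster.local
    done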

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-82knb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-82knb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-sdxnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-sdxnc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-xc5x4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-866665 -- exec busybox-5b5d89c9d6-xc5x4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (46.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-866665 -v=7 --alsologtostderr
E0315 06:14:21.072156   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.077451   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.087776   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.108105   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.148439   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.228815   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.389303   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:21.710287   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:22.351403   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:23.631654   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:26.192557   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:31.313696   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:41.553990   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:14:58.532650   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 06:15:02.034221   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-866665 -v=7 --alsologtostderr: (45.895473769s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-866665 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp testdata/cp-test.txt ha-866665:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665:/home/docker/cp-test.txt ha-866665-m02:/home/docker/cp-test_ha-866665_ha-866665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test_ha-866665_ha-866665-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665:/home/docker/cp-test.txt ha-866665-m03:/home/docker/cp-test_ha-866665_ha-866665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test_ha-866665_ha-866665-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665:/home/docker/cp-test.txt ha-866665-m04:/home/docker/cp-test_ha-866665_ha-866665-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test_ha-866665_ha-866665-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp testdata/cp-test.txt ha-866665-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m02:/home/docker/cp-test.txt ha-866665:/home/docker/cp-test_ha-866665-m02_ha-866665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test_ha-866665-m02_ha-866665.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m02:/home/docker/cp-test.txt ha-866665-m03:/home/docker/cp-test_ha-866665-m02_ha-866665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test_ha-866665-m02_ha-866665-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m02:/home/docker/cp-test.txt ha-866665-m04:/home/docker/cp-test_ha-866665-m02_ha-866665-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test_ha-866665-m02_ha-866665-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp testdata/cp-test.txt ha-866665-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt ha-866665:/home/docker/cp-test_ha-866665-m03_ha-866665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test_ha-866665-m03_ha-866665.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt ha-866665-m02:/home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test_ha-866665-m03_ha-866665-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m03:/home/docker/cp-test.txt ha-866665-m04:/home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test_ha-866665-m03_ha-866665-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp testdata/cp-test.txt ha-866665-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3271648541/001/cp-test_ha-866665-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt ha-866665:/home/docker/cp-test_ha-866665-m04_ha-866665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665 "sudo cat /home/docker/cp-test_ha-866665-m04_ha-866665.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt ha-866665-m02:/home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 "sudo cat /home/docker/cp-test_ha-866665-m04_ha-866665-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m04:/home/docker/cp-test.txt ha-866665-m03:/home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m03 "sudo cat /home/docker/cp-test_ha-866665-m04_ha-866665-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.78s)
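Note: the long sequence above is `minikube cp` plus an SSH read-back for every node pair. The two shapes it exercises, for reference:

    # Host -> node copy, verified over SSH.
    out/minikube-linux-amd64 -p ha-866665 cp testdata/cp-test.txt \
      ha-866665-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-866665 ssh -n ha-866665-m02 \
      "sudo cat /home/docker/cp-test.txt"

    # Node -> node copy (both endpoints named as <node>:<path>).
    out/minikube-linux-amd64 -p ha-866665 cp ha-866665-m02:/home/docker/cp-test.txt \
      ha-866665-m03:/home/docker/cp-test_ha-866665-m02_ha-866665-m03.txt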

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.487193671s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
x
+
TestJSONOutput/start/Command (96.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-729609 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-729609 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.870374961s)
--- PASS: TestJSONOutput/start/Command (96.87s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-729609 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-729609 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-729609 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-729609 --output=json --user=testUser: (7.426331593s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-771117 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-771117 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.880748ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"624b7d40-30bb-4086-8cf3-1fc2c8fd4e4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-771117] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b88c6eb1-ad22-4b9c-b54f-dc97cb3cb02f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18213"}}
	{"specversion":"1.0","id":"c444d35c-294b-4bb2-a928-31418575c84b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2f353f0-506b-4dc0-adc8-5fe4ba7f8386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig"}}
	{"specversion":"1.0","id":"d6fb44d9-b002-4483-b955-c4adce5d4773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube"}}
	{"specversion":"1.0","id":"7733bdc4-1676-4ced-baf4-34f826c8b1b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c592067f-c598-41a2-8da7-3439d9c2e4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"40ff71f6-52bc-439d-b9c8-e8894668c667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-771117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-771117
--- PASS: TestErrorJSONOutput (0.22s)
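Note: with --output=json minikube emits one CloudEvents object per line, and the failure above surfaces as an event of type "io.k8s.sigs.minikube.error" carrying the exit code and message. A minimal sketch of pulling that event out of the stream, assuming jq is available on the host (jq is not part of this run):

    # The unsupported driver makes start exit non-zero; the error event still
    # arrives on stdout and can be filtered by its CloudEvents type.
    out/minikube-linux-amd64 start -p json-output-error-771117 --memory=2200 \
      --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'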

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (93.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-666501 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-666501 --driver=kvm2  --container-runtime=crio: (44.862334003s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-669632 --driver=kvm2  --container-runtime=crio
E0315 06:39:21.072085   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:39:58.532640   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-669632 --driver=kvm2  --container-runtime=crio: (45.510279417s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-666501
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-669632
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-669632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-669632
helpers_test.go:175: Cleaning up "first-666501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-666501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-666501: (1.019405982s)
--- PASS: TestMinikubeProfile (93.30s)
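Note: `minikube profile <name>` only switches the active profile; the test then relies on `profile list -ojson` to confirm the switch. A minimal sketch of inspecting that output, assuming jq is available on the host and that the JSON keeps its usual valid/invalid grouping:

    # List all healthy profiles by name after switching the active one.
    out/minikube-linux-amd64 profile first-666501
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'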

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (26.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-481257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-481257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.095316877s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.10s)
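Note: the mount-start profiles are Kubernetes-free VMs whose only purpose is to carry a 9p host mount. The start-and-verify flow from this run, for reference (the grep is quoted here so it runs inside the guest):

    # Start a --no-kubernetes VM with a 9p mount and check it from the guest.
    out/minikube-linux-amd64 start -p mount-start-1-481257 --memory=2048 --mount \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-481257 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-481257 ssh -- "mount | grep 9p"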

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-481257 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-481257 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-493536 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-493536 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.601143611s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-481257 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-493536
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-493536: (1.338914757s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.97s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-493536
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-493536: (21.974310371s)
--- PASS: TestMountStart/serial/RestartStopped (22.97s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-493536 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (105.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-763469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 06:43:01.577160   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-763469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.47755967s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.91s)
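Note: the multinode flow is the non-HA counterpart of the cluster starts above: one control plane plus workers, grown afterwards with `node add`. The invocations from this run, for reference:

    # Two-node cluster (control plane + worker), then a third node added later.
    out/minikube-linux-amd64 start -p multinode-763469 --wait=true --memory=2200 \
      --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
    out/minikube-linux-amd64 node add -p multinode-763469 -v 3 --alsologtostderr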

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-763469 -- rollout status deployment/busybox: (5.695901091s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-ktsnt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-tsdl7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-ktsnt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-tsdl7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-ktsnt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-tsdl7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.42s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-ktsnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-ktsnt -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-tsdl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-763469 -- exec busybox-5b5d89c9d6-tsdl7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
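Note: the pipeline used above extracts the resolved address of host.minikube.internal from busybox nslookup output; a standalone sketch of the same extraction, assuming the typical five-line busybox answer format (the address sits on line 5, third space-separated field):
	printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1\n' | awk 'NR==5' | cut -d' ' -f3
	# prints 192.168.39.1, the host-side gateway address the pods then ping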

                                                
                                    
TestMultiNode/serial/AddNode (40.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-763469 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-763469 -v 3 --alsologtostderr: (40.253858571s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-763469 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp testdata/cp-test.txt multinode-763469:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469:/home/docker/cp-test.txt multinode-763469-m02:/home/docker/cp-test_multinode-763469_multinode-763469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test_multinode-763469_multinode-763469-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469:/home/docker/cp-test.txt multinode-763469-m03:/home/docker/cp-test_multinode-763469_multinode-763469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test_multinode-763469_multinode-763469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp testdata/cp-test.txt multinode-763469-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt multinode-763469:/home/docker/cp-test_multinode-763469-m02_multinode-763469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test_multinode-763469-m02_multinode-763469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m02:/home/docker/cp-test.txt multinode-763469-m03:/home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test_multinode-763469-m02_multinode-763469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp testdata/cp-test.txt multinode-763469-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3819273629/001/cp-test_multinode-763469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt multinode-763469:/home/docker/cp-test_multinode-763469-m03_multinode-763469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469 "sudo cat /home/docker/cp-test_multinode-763469-m03_multinode-763469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 cp multinode-763469-m03:/home/docker/cp-test.txt multinode-763469-m02:/home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/cp-test_multinode-763469-m03_multinode-763469-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.61s)
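Note: the copy matrix above is built from two primitives, minikube cp and a per-node ssh read-back; a minimal sketch, assuming the same multinode-763469 profile and a hypothetical local file /tmp/example.txt:
	minikube -p multinode-763469 cp /tmp/example.txt multinode-763469-m02:/home/docker/example.txt
	minikube -p multinode-763469 ssh -n multinode-763469-m02 "sudo cat /home/docker/example.txt"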

                                                
                                    
TestMultiNode/serial/StopNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-763469 node stop m03: (1.579566893s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-763469 status: exit status 7 (452.496352ms)

                                                
                                                
-- stdout --
	multinode-763469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-763469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-763469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr: exit status 7 (461.874036ms)

                                                
                                                
-- stdout --
	multinode-763469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-763469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-763469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 06:44:09.144689   41012 out.go:291] Setting OutFile to fd 1 ...
	I0315 06:44:09.144863   41012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:44:09.144881   41012 out.go:304] Setting ErrFile to fd 2...
	I0315 06:44:09.144892   41012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 06:44:09.145719   41012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 06:44:09.146065   41012 out.go:298] Setting JSON to false
	I0315 06:44:09.146106   41012 mustload.go:65] Loading cluster: multinode-763469
	I0315 06:44:09.146380   41012 notify.go:220] Checking for updates...
	I0315 06:44:09.147170   41012 config.go:182] Loaded profile config "multinode-763469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 06:44:09.147197   41012 status.go:255] checking status of multinode-763469 ...
	I0315 06:44:09.147794   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.147868   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.164576   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33715
	I0315 06:44:09.164991   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.165511   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.165537   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.166024   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.166256   41012 main.go:141] libmachine: (multinode-763469) Calling .GetState
	I0315 06:44:09.167845   41012 status.go:330] multinode-763469 host status = "Running" (err=<nil>)
	I0315 06:44:09.167876   41012 host.go:66] Checking if "multinode-763469" exists ...
	I0315 06:44:09.168181   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.168213   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.185832   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0315 06:44:09.186225   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.186731   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.186746   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.187080   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.187327   41012 main.go:141] libmachine: (multinode-763469) Calling .GetIP
	I0315 06:44:09.190501   41012 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:44:09.190971   41012 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:44:09.191006   41012 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:44:09.191155   41012 host.go:66] Checking if "multinode-763469" exists ...
	I0315 06:44:09.191461   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.191506   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.207292   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0315 06:44:09.207691   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.208160   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.208179   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.208511   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.208735   41012 main.go:141] libmachine: (multinode-763469) Calling .DriverName
	I0315 06:44:09.208911   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:44:09.208938   41012 main.go:141] libmachine: (multinode-763469) Calling .GetSSHHostname
	I0315 06:44:09.211684   41012 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:44:09.212046   41012 main.go:141] libmachine: (multinode-763469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:13:9c", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:41:38 +0000 UTC Type:0 Mac:52:54:00:3c:13:9c Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-763469 Clientid:01:52:54:00:3c:13:9c}
	I0315 06:44:09.212076   41012 main.go:141] libmachine: (multinode-763469) DBG | domain multinode-763469 has defined IP address 192.168.39.29 and MAC address 52:54:00:3c:13:9c in network mk-multinode-763469
	I0315 06:44:09.212209   41012 main.go:141] libmachine: (multinode-763469) Calling .GetSSHPort
	I0315 06:44:09.212368   41012 main.go:141] libmachine: (multinode-763469) Calling .GetSSHKeyPath
	I0315 06:44:09.212533   41012 main.go:141] libmachine: (multinode-763469) Calling .GetSSHUsername
	I0315 06:44:09.212665   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469/id_rsa Username:docker}
	I0315 06:44:09.300586   41012 ssh_runner.go:195] Run: systemctl --version
	I0315 06:44:09.307976   41012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:44:09.331317   41012 kubeconfig.go:125] found "multinode-763469" server: "https://192.168.39.29:8443"
	I0315 06:44:09.331346   41012 api_server.go:166] Checking apiserver status ...
	I0315 06:44:09.331386   41012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 06:44:09.347719   41012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0315 06:44:09.360786   41012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 06:44:09.360850   41012 ssh_runner.go:195] Run: ls
	I0315 06:44:09.367234   41012 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0315 06:44:09.373441   41012 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0315 06:44:09.373466   41012 status.go:422] multinode-763469 apiserver status = Running (err=<nil>)
	I0315 06:44:09.373475   41012 status.go:257] multinode-763469 status: &{Name:multinode-763469 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:44:09.373506   41012 status.go:255] checking status of multinode-763469-m02 ...
	I0315 06:44:09.373819   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.373868   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.388819   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0315 06:44:09.389288   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.389772   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.389790   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.390116   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.390403   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetState
	I0315 06:44:09.392414   41012 status.go:330] multinode-763469-m02 host status = "Running" (err=<nil>)
	I0315 06:44:09.392431   41012 host.go:66] Checking if "multinode-763469-m02" exists ...
	I0315 06:44:09.392823   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.392870   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.407822   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I0315 06:44:09.408324   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.408790   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.408812   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.409105   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.409346   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetIP
	I0315 06:44:09.412458   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | domain multinode-763469-m02 has defined MAC address 52:54:00:b8:cd:4b in network mk-multinode-763469
	I0315 06:44:09.412907   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cd:4b", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:42:43 +0000 UTC Type:0 Mac:52:54:00:b8:cd:4b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-763469-m02 Clientid:01:52:54:00:b8:cd:4b}
	I0315 06:44:09.412947   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | domain multinode-763469-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:b8:cd:4b in network mk-multinode-763469
	I0315 06:44:09.413257   41012 host.go:66] Checking if "multinode-763469-m02" exists ...
	I0315 06:44:09.413577   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.413613   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.428882   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0315 06:44:09.429298   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.429788   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.429806   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.430143   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.430351   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .DriverName
	I0315 06:44:09.430525   41012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 06:44:09.430574   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetSSHHostname
	I0315 06:44:09.433210   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | domain multinode-763469-m02 has defined MAC address 52:54:00:b8:cd:4b in network mk-multinode-763469
	I0315 06:44:09.433613   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:cd:4b", ip: ""} in network mk-multinode-763469: {Iface:virbr1 ExpiryTime:2024-03-15 07:42:43 +0000 UTC Type:0 Mac:52:54:00:b8:cd:4b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-763469-m02 Clientid:01:52:54:00:b8:cd:4b}
	I0315 06:44:09.433643   41012 main.go:141] libmachine: (multinode-763469-m02) DBG | domain multinode-763469-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:b8:cd:4b in network mk-multinode-763469
	I0315 06:44:09.433789   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetSSHPort
	I0315 06:44:09.433971   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetSSHKeyPath
	I0315 06:44:09.434124   41012 main.go:141] libmachine: (multinode-763469-m02) Calling .GetSSHUsername
	I0315 06:44:09.434267   41012 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18213-8825/.minikube/machines/multinode-763469-m02/id_rsa Username:docker}
	I0315 06:44:09.516511   41012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 06:44:09.531415   41012 status.go:257] multinode-763469-m02 status: &{Name:multinode-763469-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0315 06:44:09.531455   41012 status.go:255] checking status of multinode-763469-m03 ...
	I0315 06:44:09.531771   41012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 06:44:09.531834   41012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 06:44:09.547364   41012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0315 06:44:09.547837   41012 main.go:141] libmachine: () Calling .GetVersion
	I0315 06:44:09.548286   41012 main.go:141] libmachine: Using API Version  1
	I0315 06:44:09.548306   41012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 06:44:09.548627   41012 main.go:141] libmachine: () Calling .GetMachineName
	I0315 06:44:09.548820   41012 main.go:141] libmachine: (multinode-763469-m03) Calling .GetState
	I0315 06:44:09.550405   41012 status.go:330] multinode-763469-m03 host status = "Stopped" (err=<nil>)
	I0315 06:44:09.550422   41012 status.go:343] host is not running, skipping remaining checks
	I0315 06:44:09.550430   41012 status.go:257] multinode-763469-m03 status: &{Name:multinode-763469-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
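Note: status intentionally exits non-zero once any node is down, which is what the exit status 7 above reflects; a minimal sketch of scripting against that, assuming the same profile:
	minikube -p multinode-763469 node stop m03
	minikube -p multinode-763469 status || echo "status exited with $? (a node is stopped)"    # exited with 7 in this run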

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 node start m03 -v=7 --alsologtostderr
E0315 06:44:21.071461   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-763469 node start m03 -v=7 --alsologtostderr: (27.389060285s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.02s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-763469 node delete m03: (1.758147169s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.30s)
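Note: the go-template above prints the Ready condition of each remaining node; an equivalent check written with jsonpath (an alternative formulation, not what the test runs) would be roughly:
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'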

                                                
                                    
TestMultiNode/serial/RestartMultiNode (176.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-763469 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 06:54:21.072263   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 06:54:58.532119   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-763469 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m56.217378041s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-763469 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (176.77s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-763469
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-763469-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-763469-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.693736ms)

                                                
                                                
-- stdout --
	* [multinode-763469-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-763469-m02' is duplicated with machine name 'multinode-763469-m02' in profile 'multinode-763469'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-763469-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-763469-m03 --driver=kvm2  --container-runtime=crio: (43.038393372s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-763469
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-763469: exit status 80 (227.451163ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-763469 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-763469-m03 already exists in multinode-763469-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-763469-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.21s)
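Note: both rejections above come from name collisions with the existing multinode-763469 profile: a new profile may not reuse one of its machine names, and node add refuses to add a machine whose name is already a standalone profile. A minimal reproduction of the first case, assuming that profile is still present:
	minikube profile list
	minikube start -p multinode-763469-m02 --driver=kvm2 --container-runtime=crio    # exits 14 with MK_USAGE: profile name should be unique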

                                                
                                    
TestScheduledStopUnix (116.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-091041 --memory=2048 --driver=kvm2  --container-runtime=crio
E0315 06:59:58.532513   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-091041 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.001550709s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-091041 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-091041 -n scheduled-stop-091041
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-091041 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-091041 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-091041 -n scheduled-stop-091041
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-091041
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-091041 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-091041
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-091041: exit status 7 (76.09712ms)

                                                
                                                
-- stdout --
	scheduled-stop-091041
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-091041 -n scheduled-stop-091041
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-091041 -n scheduled-stop-091041: exit status 7 (75.196891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-091041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-091041
--- PASS: TestScheduledStopUnix (116.76s)
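Note: the scheduled-stop flow exercised above is driven by three flags; a minimal sketch, assuming a hypothetical profile named demo:
	minikube stop -p demo --schedule 5m                   # arm a stop five minutes out
	minikube status -p demo --format={{.TimeToStop}}      # report the remaining time while armed
	minikube stop -p demo --cancel-scheduled              # disarm without stopping the machine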

                                                
                                    
TestRunningBinaryUpgrade (183.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3359695981 start -p running-upgrade-522675 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0315 07:04:58.532733   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3359695981 start -p running-upgrade-522675 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m25.513876232s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-522675 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-522675 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.903704652s)
helpers_test.go:175: Cleaning up "running-upgrade-522675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-522675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-522675: (1.208805081s)
--- PASS: TestRunningBinaryUpgrade (183.75s)

                                                
                                    
TestPause/serial/Start (124.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-082115 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-082115 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m4.503066321s)
--- PASS: TestPause/serial/Start (124.50s)

                                                
                                    
TestNetworkPlugins/group/false (3.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-636355 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-636355 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.433622ms)

                                                
                                                
-- stdout --
	* [false-636355] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:01:57.549690   47537 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:01:57.549916   47537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:57.549929   47537 out.go:304] Setting ErrFile to fd 2...
	I0315 07:01:57.549935   47537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:57.550568   47537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-8825/.minikube/bin
	I0315 07:01:57.551330   47537 out.go:298] Setting JSON to false
	I0315 07:01:57.552208   47537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6214,"bootTime":1710479904,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 07:01:57.552273   47537 start.go:139] virtualization: kvm guest
	I0315 07:01:57.554577   47537 out.go:177] * [false-636355] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 07:01:57.555906   47537 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:01:57.555943   47537 notify.go:220] Checking for updates...
	I0315 07:01:57.557282   47537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:01:57.558825   47537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	I0315 07:01:57.560092   47537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	I0315 07:01:57.561332   47537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 07:01:57.562617   47537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:01:57.564312   47537 config.go:182] Loaded profile config "kubernetes-upgrade-294072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0315 07:01:57.564401   47537 config.go:182] Loaded profile config "offline-crio-314098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:01:57.564496   47537 config.go:182] Loaded profile config "pause-082115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 07:01:57.564580   47537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:01:57.601240   47537 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 07:01:57.602786   47537 start.go:297] selected driver: kvm2
	I0315 07:01:57.602809   47537 start.go:901] validating driver "kvm2" against <nil>
	I0315 07:01:57.602820   47537 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:01:57.604817   47537 out.go:177] 
	W0315 07:01:57.606279   47537 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0315 07:01:57.607593   47537 out.go:177] 

                                                
                                                
** /stderr **
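Note: the MK_USAGE failure above is the expected result of combining --cni=false with the crio runtime; the same start line with any concrete CNI selection would pass validation instead. A sketch, assuming the bridge CNI:
	out/minikube-linux-amd64 start -p false-636355 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio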
net_test.go:88: 
----------------------- debugLogs start: false-636355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-636355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: cri-docker daemon status:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: cri-docker daemon config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: cri-dockerd version:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: containerd daemon status:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: containerd daemon config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: /etc/containerd/config.toml:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: containerd config dump:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: crio daemon status:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: crio daemon config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: /etc/crio:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

>>> host: crio config:
* Profile "false-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-636355"

----------------------- debugLogs end: false-636355 [took: 3.003570027s] --------------------------------
helpers_test.go:175: Cleaning up "false-636355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-636355
--- PASS: TestNetworkPlugins/group/false (3.26s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.783109ms)

-- stdout --
	* [NoKubernetes-254279] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-8825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-8825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
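Note: the MK_USAGE exit above is the guard this test expects: --no-kubernetes cannot be combined with --kubernetes-version. A minimal sketch of the two ways around it suggested by the error text, using the same binary and profile as the run above (not executed by the test itself):
    # either clear any globally configured version first ...
    $ out/minikube-linux-amd64 config unset kubernetes-version
    # ... or start the profile without Kubernetes and without pinning a version
    $ out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --driver=kvm2 --container-runtime=crio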

TestNoKubernetes/serial/StartWithK8s (113.77s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254279 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254279 --driver=kvm2  --container-runtime=crio: (1m53.500265433s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-254279 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (113.77s)

TestStoppedBinaryUpgrade/Setup (2.3s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

TestStoppedBinaryUpgrade/Upgrade (188.76s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3103485216 start -p stopped-upgrade-691560 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3103485216 start -p stopped-upgrade-691560 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.964289168s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3103485216 -p stopped-upgrade-691560 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3103485216 -p stopped-upgrade-691560 stop: (2.1416307s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-691560 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-691560 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.651730999s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (188.76s)

TestNoKubernetes/serial/StartWithStopK8s (5.56s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.504550352s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-254279 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-254279 status -o json: exit status 2 (238.356159ms)

-- stdout --
	{"Name":"NoKubernetes-254279","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-254279
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.56s)

TestPause/serial/SecondStartNoReconfiguration (39.44s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-082115 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-082115 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.413260156s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.44s)

TestNoKubernetes/serial/Start (28.23s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0315 07:04:04.122131   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 07:04:21.071468   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254279 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.231386303s)
--- PASS: TestNoKubernetes/serial/Start (28.23s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-254279 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-254279 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.240151ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
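Note: the exit status 1 above is exactly what this check is after: "systemctl is-active --quiet" exits non-zero when the unit is not running (conventionally 3 for an inactive unit, matching the "Process exited with status 3" reported over ssh), so a failing probe means kubelet is off. A hedged manual equivalent, dropping --quiet so the state is printed (not part of the test code):
    $ out/minikube-linux-amd64 ssh -p NoKubernetes-254279 "sudo systemctl is-active kubelet"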

TestNoKubernetes/serial/ProfileList (1.12s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestNoKubernetes/serial/Stop (1.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-254279
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-254279: (1.386998544s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (44.16s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-254279 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-254279 --driver=kvm2  --container-runtime=crio: (44.164264675s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.16s)

TestPause/serial/Pause (0.7s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-082115 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.25s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-082115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-082115 --output=json --layout=cluster: exit status 2 (254.006353ms)

-- stdout --
	{"Name":"pause-082115","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-082115","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
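Note: the status JSON above models the paused cluster as StatusCode 418 / "Paused" on the API server while the kubelet reports 405 / "Stopped", and the command as a whole exits 2. A small sketch for pulling just those per-component states out of the same output with jq (jq is not used by the test harness):
    $ out/minikube-linux-amd64 status -p pause-082115 --output=json --layout=cluster | jq '.Nodes[].Components'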

TestPause/serial/Unpause (0.66s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-082115 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.92s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-082115 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

TestPause/serial/DeletePaused (0.81s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-082115 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

TestPause/serial/VerifyDeletedResources (0.28s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-254279 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-254279 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.863805ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-691560
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-691560: (1.152946437s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestStartStop/group/embed-certs/serial/FirstStart (110.64s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m50.635789841s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (110.64s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0315 07:09:21.071423   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 07:09:58.532048   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m3.420078439s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.42s)

TestStartStop/group/embed-certs/serial/DeployApp (11.33s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-709708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b282045-a318-4c18-b006-d7e8056ed790] Pending
helpers_test.go:344: "busybox" [2b282045-a318-4c18-b006-d7e8056ed790] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2b282045-a318-4c18-b006-d7e8056ed790] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003784947s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-709708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.33s)
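Note: the DeployApp step above amounts to applying testdata/busybox.yaml and waiting for the labelled pod to become Ready. A rough manual equivalent with kubectl wait, reusing the context and the 8m0s budget from the harness (illustrative only, not the test's own code):
    $ kubectl --context embed-certs-709708 create -f testdata/busybox.yaml
    $ kubectl --context embed-certs-709708 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m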

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-709708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-709708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107444297s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-709708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2c1b4bc-2e61-4879-8e9c-d3322664b46f] Pending
helpers_test.go:344: "busybox" [e2c1b4bc-2e61-4879-8e9c-d3322664b46f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e2c1b4bc-2e61-4879-8e9c-d3322664b46f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003772786s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-128870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-128870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077022297s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-128870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/FirstStart (114.14s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-184055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-184055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m54.142567955s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.14s)

TestStartStop/group/embed-certs/serial/SecondStart (680.27s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m19.884333267s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709708 -n embed-certs-709708
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (680.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-128870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m0.432375569s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-128870 -n default-k8s-diff-port-128870
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.73s)

TestStartStop/group/no-preload/serial/DeployApp (10.3s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-184055 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb] Pending
helpers_test.go:344: "busybox" [3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ebfb8a8-6b7a-4573-9200-29d06ab0e2fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00444163s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-184055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-184055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-184055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/old-k8s-version/serial/Stop (1.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-981420 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-981420 --alsologtostderr -v=3: (1.427422837s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-981420 -n old-k8s-version-981420: exit status 7 (75.501992ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-981420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (434.97s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-184055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0315 07:19:21.071329   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
E0315 07:19:58.532396   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
E0315 07:20:44.123207   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-184055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (7m14.682156393s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184055 -n no-preload-184055
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (434.97s)

TestStartStop/group/newest-cni/serial/FirstStart (57.97s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-027190 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-027190 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (57.971364703s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.97s)

TestNetworkPlugins/group/auto/Start (59.17s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.174479858s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.17s)

TestNetworkPlugins/group/kindnet/Start (86.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m26.144957787s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.15s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-027190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-027190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187136121s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (10.83s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-027190 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-027190 --alsologtostderr -v=3: (10.830404742s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.83s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-027190 -n newest-cni-027190
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-027190 -n newest-cni-027190: exit status 7 (84.964934ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-027190 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (69.07s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-027190 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0315 07:39:21.071543   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/functional-380088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-027190 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m8.771217807s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-027190 -n newest-cni-027190
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (69.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (15.59s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-636355 replace --force -f testdata/netcat-deployment.yaml: (2.220865961s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b26qk" [ecdf7f6f-a1ad-4de0-84d2-e08b09bbfa76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b26qk" [ecdf7f6f-a1ad-4de0-84d2-e08b09bbfa76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004431599s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.59s)

TestNetworkPlugins/group/auto/DNS (33.46s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default
E0315 07:39:58.532369   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/addons-480837/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.185091893s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.195620948s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (33.46s)
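Note: the two "connection timed out; no servers could be reached" failures above show the harness retrying the lookup until the third attempt succeeds rather than failing on the first timeout. A hedged way to reproduce the same probe by hand, with the retry loop added purely for illustration:
    $ for i in 1 2 3; do kubectl --context auto-636355 exec deployment/netcat -- nslookup kubernetes.default && break; sleep 10; done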

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-027190 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.7s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-027190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-027190 -n newest-cni-027190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-027190 -n newest-cni-027190: exit status 2 (268.903609ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-027190 -n newest-cni-027190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-027190 -n newest-cni-027190: exit status 2 (266.233054ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-027190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-027190 -n newest-cni-027190
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-027190 -n newest-cni-027190
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ztqj9" [bc1cb5ae-2270-4814-aef9-557ac9b2f1da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006156316s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (94.7s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.702443082s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.70s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4n7bg" [1d4173c2-582f-4c04-b347-578db8d2cff3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4n7bg" [1d4173c2-582f-4c04-b347-578db8d2cff3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004459219s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (91.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.255777467s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.26s)

TestNetworkPlugins/group/enable-default-cni/Start (90.08s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0315 07:40:44.303755   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:40:44.944574   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:40:46.225466   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:40:48.785689   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:40:53.906744   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:41:04.147402   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:41:24.627971   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
E0315 07:41:37.141860   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.147111   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.157441   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.177748   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.218048   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.298401   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.458851   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:37.779262   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:38.419522   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
E0315 07:41:39.700338   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m30.079000728s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.08s)
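The E0315 ... cert_rotation.go:168 lines interleaved above are noise from the shared test process rather than failures of this start: client-go keeps trying to reload client certificates for profiles used elsewhere in the run (default-k8s-diff-port-128870, old-k8s-version-981420) whose cert files have since been removed or regenerated, hence the "no such file or directory". An assumed quick way to confirm on the runner that these are stale references:

    minikube profile list        # the referenced profile is gone or being rebuilt
    ls /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/   # its client.crt no longer exists at the logged path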

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0315 07:41:47.382338   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.263930989s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p5psk" [57738aec-dd7a-4cef-aa7d-07c062515369] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007160591s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
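The */ControllerPod subtests gate the rest of a group on the CNI's own agent pod becoming Ready, matched by label (k8s-app=calico-node in kube-system here; app=flannel in kube-flannel for the flannel group below). An equivalent hand check using kubectl's built-in wait:

    kubectl --context calico-636355 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=10m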

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)
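The */KubeletFlags subtests simply SSH into the node and list the kubelet process with its full command line, which is what the suite asserts its flag checks against; the same probe works interactively:

    # pgrep -a prints the full command line, so the kubelet's configured flags are visible
    minikube ssh -p calico-636355 "pgrep -a kubelet"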

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c78tf" [59239282-f2d0-4a7a-a044-15dd971877ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 07:41:57.622963   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c78tf" [59239282-f2d0-4a7a-a044-15dd971877ce] Running
E0315 07:42:05.588921   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004578956s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)
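The */NetCatPod subtests (re)deploy the shared test workload, a dnsutils-based Deployment named netcat listening on port 8080, using replace --force so reruns start from a clean object, then wait for the app=netcat pod to become Ready; the DNS, Localhost and HairPin subtests all exec into this Deployment. Hand-run sketch (the manifest path is the repo's testdata file):

    kubectl --context calico-636355 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-636355 wait pod -l app=netcat --for=condition=Ready --timeout=15m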

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-28k7w" [70c339c0-88ea-4f3b-ac65-60b00b0e69ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-28k7w" [70c339c0-88ea-4f3b-ac65-60b00b0e69ef] Running
E0315 07:42:18.103225   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/old-k8s-version-981420/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.201296572s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6v56g" [d8cf0e04-80d9-442f-ba71-1774c5241c70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6v56g" [d8cf0e04-80d9-442f-ba71-1774c5241c70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004616928s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-636355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.01339368s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-696vj" [0341089a-9d7e-4ed6-9587-2b0729a67265] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005082619s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7hdfv" [581f07fb-372c-4569-a4b4-2f803c0a84b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7hdfv" [581f07fb-372c-4569-a4b4-2f803c0a84b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004578383s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-636355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-636355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t4vkm" [c0ae51ab-344a-4e8b-91db-07e4d3eef289] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 07:43:27.509782   16075 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-8825/.minikube/profiles/default-k8s-diff-port-128870/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t4vkm" [c0ae51ab-344a-4e8b-91db-07e4d3eef289] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003969173s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-636355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-636355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.2
267 TestNetworkPlugins/group/kubenet 3.24
277 TestNetworkPlugins/group/cilium 3.6
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
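This and the matching v1.28.4 / v1.29.0-rc.2 skips below fire because a preloaded image tarball for the requested Kubernetes version already exists on the runner, so there are no individually cached images (or separately downloaded binaries) to verify. A way to see which preloads a host already has, assuming minikube's default cache location (adjust if MINIKUBE_HOME is set):

    ls ~/.minikube/cache/preloaded-tarball/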

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
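All of the TunnelCmd skips (this one and the seven that follow) have a single cause: minikube tunnel has to edit the host routing table, and on this runner the route command cannot be run without a password prompt, so the helper exits with status 1 and the whole serial chain is skipped. A generic, assumed stand-in check for whether the current user could run these tests non-interactively (it is not the exact probe the test performs):

    sudo -n true && echo "passwordless sudo available" || echo "password required - tunnel tests will skip"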

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-901843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-901843
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-636355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
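The empty kubeconfig above, like every "context was not found" and "Profile ... not found" line in this dump, is expected rather than a defect: the kubenet variant is skipped up front because the crio runtime needs a CNI plugin and kubenet is not one, so the debug collector is probing a profile and kubectl context that never existed. A quick confirmation on the runner or a workstation:

    minikube profile list                        # kubenet-636355 does not appear
    kubectl config get-contexts kubenet-636355   # fails: context not found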

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-636355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-636355"

                                                
                                                
----------------------- debugLogs end: kubenet-636355 [took: 3.081190773s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-636355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-636355
--- SKIP: TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-636355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-636355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-636355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-636355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-636355"

                                                
                                                
----------------------- debugLogs end: cilium-636355 [took: 3.449048278s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-636355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-636355
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)

                                                
                                    